# Imports

## List imports

> Returns a list of all imports configured in the account.\
> If no imports exist in the account, a 204 response with no body will be returned.

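Before the full specification below, here is a minimal sketch of calling this endpoint. It assumes the conventional `/v1/imports` list path and an API token in an `INTEGRATOR_API_TOKEN` environment variable (both assumptions, not taken from the spec); it only illustrates bearer auth, the `include` projection parameter, and the 200-vs-204 contract described above.

```typescript
// Hedged sketch: list imports with optional summary projection.
// Assumptions (not confirmed by the spec excerpt): the list path is
// /v1/imports and the token lives in INTEGRATOR_API_TOKEN.

interface ImportSummary {
  _id: string;
  name?: string;
  adaptorType?: string;        // always-on field for import projections
  [field: string]: unknown;    // any other projected fields
}

async function listImports(fields?: string[]): Promise<ImportSummary[]> {
  const base = "https://api.integrator.io";      // US / default region server
  const url = new URL("/v1/imports", base);      // assumed list endpoint
  if (fields?.length) {
    // A comma-separated `include` value triggers summary projection
    // on supported list endpoints (see parameter docs below).
    url.searchParams.set("include", fields.join(","));
  }

  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${process.env.INTEGRATOR_API_TOKEN}` },
  });

  if (res.status === 204) return [];             // no imports in the account
  if (!res.ok) throw new Error(`List imports failed: ${res.status}`);
  return (await res.json()) as ImportSummary[];
}

// Usage: project just the identity fields plus each import's adaptor type.
listImports(["name", "adaptorType"]).then((imports) =>
  imports.forEach((i) => console.log(i._id, i.name, i.adaptorType)),
);
```

Passing both `include` and `exclude` in the same request returns a 400 `invalid_query_params` error, so a client wrapper like this should expose only one of the two.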
````json
{"openapi":"3.1.0","info":{"title":"Imports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"parameters":{"Include":{"name":"include","in":"query","required":false,"description":"Comma-separated list of fields to project into each returned record.\nTriggers **summary projection** on supported list endpoints: the server\nreturns a minimal identity set for each record (`_id`, `name`, plus a\nresource-specific always-on set like `adaptorType` on exports/imports,\nor richer defaults on `ashares`, `audit`, `httpconnectors`, `transfers`,\netc.) and adds any listed fields that exist on the record. Listed fields\nthe record doesn't carry are silently dropped.\n\nDot notation is supported for projecting nested sub-fields — e.g.\n`include=ftp.directoryPath` on `/v1/exports` returns just that nested\nfield inside `ftp` for FTP-type exports (and omits `ftp` entirely for\nnon-FTP exports).\n\nRules:\n- Value regex is `{a-z A-Z . _}` (letters, dots, underscores) plus the\n  comma separator; digits are also accepted in practice. Any other\n  character returns **400 `invalid_query_params`**.\n- Empty value (`include=`) or bare `include` is ignored — the full\n  default record is returned.\n- `include` and `exclude` are **mutually exclusive**. Passing both\n  returns **400 `invalid_query_params`**: *\"Please provide either\n  include or exclude param in the request query and not both.\"*\n- Array-bracket syntax (`include[]=...`) is not supported and can return\n  a 500.\n- Only list endpoints honor projection — on GET-by-id the parameter is\n  silently ignored.\n- A small set of list endpoints explicitly reject both `include` and\n  `exclude` with **400 `invalid_query_params`** and a message of the form\n  *\"Include or exclude query params are not applicable for `<resource>`\n  resource.\"* Known rejections: `/v1/ediprofiles`, `/v1/environments`,\n  `/v1/iClients`, `/v1/lookupcaches`, `/v1/tags`.","schema":{"type":"string"}},"Exclude":{"name":"exclude","in":"query","required":false,"description":"Comma-separated list of fields to remove from the default response on\nsupported list endpoints. Unlike `include`, `exclude` does NOT trigger\nsummary projection — callers get the standard full-record shape with the\nnamed fields stripped out.\n\nRules:\n- Value regex is `{a-z A-Z . _}` (letters, dots, underscores) plus the\n  comma separator; digits are also accepted in practice. Any other\n  character returns **400 `invalid_query_params`**.\n- Empty value (`exclude=`) is ignored.\n- Certain protected identity fields **cannot be stripped** — e.g.\n  `exclude=name` on `/v1/exports` is silently ignored and `name` remains\n  in the response. Protected sets vary per resource.\n- `include` and `exclude` are **mutually exclusive**. 
Passing both\n  returns **400 `invalid_query_params`**: *\"Please provide either\n  include or exclude param in the request query and not both.\"*\n- Only list endpoints honor stripping — on GET-by-id the parameter is\n  silently ignored.\n- A small set of list endpoints explicitly reject both `include` and\n  `exclude` with **400 `invalid_query_params`** and a message of the form\n  *\"Include or exclude query params are not applicable for `<resource>`\n  resource.\"* Known rejections: `/v1/ediprofiles`, `/v1/environments`,\n  `/v1/iClients`, `/v1/lookupcaches`, `/v1/tags`.","schema":{"type":"string"}}},"schemas":{"Response":{"type":"object","description":"Complete export object as returned by the API","allOf":[{"$ref":"#/components/schemas/Request"},{"$ref":"#/components/schemas/ResourceResponse"},{"$ref":"#/components/schemas/IAResourceResponse"},{"type":"object","properties":{"aiDescription":{"$ref":"#/components/schemas/AIDescription"},"apim":{"$ref":"#/components/schemas/APIM"},"apiIdentifier":{"type":"string","readOnly":true,"description":"API identifier assigned to this import."},"_sourceId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the source resource this import was created from."},"_templateId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the template used to create this import."},"draft":{"type":"boolean","readOnly":true,"description":"Indicates whether this import is in draft state."},"draftExpiresAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the draft version of this import expires."},"debugUntil":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp until which debug logging is enabled for this import."}}}]},"Request":{"type":"object","description":"Configuration for import","properties":{"_connectionId":{"type":"string","format":"objectId","description":"A unique identifier representing the specific connection instance within the system. 
This ID is used to track, manage, and reference the connection throughout its lifecycle, ensuring accurate routing and association of messages or actions to the correct connection context.\n\n**Field behavior**\n- Uniquely identifies a single connection session.\n- Remains consistent for the duration of the connection.\n- Used to correlate requests, responses, and events related to the connection.\n- Typically generated by the system upon connection establishment.\n\n**Implementation guidance**\n- Ensure the connection ID is globally unique within the scope of the application.\n- Use a format that supports easy storage and retrieval, such as UUID or a similar unique string.\n- Avoid exposing sensitive information within the connection ID.\n- Validate the connection ID format on input to prevent injection or misuse.\n\n**Examples**\n- \"123e4567-e89b-12d3-a456-426614174000\"\n- \"conn-9876543210\"\n- \"a1b2c3d4e5f6g7h8i9j0\"\n\n**Important notes**\n- The connection ID should not be reused for different connections.\n- It is critical for maintaining session integrity and security.\n- Should be treated as an opaque value by clients; its internal structure should not be assumed.\n\n**Dependency chain**\n- Generated during connection initialization.\n- Used by session management and message routing components.\n- Referenced in logs, audits, and debugging tools.\n\n**Technical details**\n- Typically implemented as a string data type.\n- May be generated using UUID libraries or custom algorithms.\n- Stored in connection metadata and passed in API headers or payloads as needed."},"_integrationId":{"type":"string","format":"objectId","description":"The unique identifier for the integration instance associated with the current operation or resource. This ID is used to reference and manage the specific integration within the system, ensuring that actions and data are correctly linked to the appropriate integration context.\n\n**Field behavior**\n- Uniquely identifies an integration instance.\n- Used to associate requests or resources with a specific integration.\n- Typically immutable once set to maintain consistent linkage.\n- May be required for operations involving integration-specific data or actions.\n\n**Implementation guidance**\n- Ensure the ID is generated in a globally unique manner to avoid collisions.\n- Validate the format and existence of the integration ID before processing requests.\n- Use this ID to fetch or update integration-related configurations or data.\n- Secure the ID to prevent unauthorized access or manipulation.\n\n**Examples**\n- \"intg_1234567890abcdef\"\n- \"integration-987654321\"\n- \"a1b2c3d4e5f6g7h8i9j0\"\n\n**Important notes**\n- This ID should not be confused with user IDs or other resource identifiers.\n- It is critical for maintaining the integrity of integration-related workflows.\n- Changes to this ID after creation can lead to inconsistencies or errors.\n\n**Dependency chain**\n- Dependent on the integration creation or registration process.\n- Used by downstream services or components that interact with the integration.\n- May be referenced in logs, audit trails, and monitoring systems.\n\n**Technical details**\n- Typically represented as a string.\n- May follow a specific pattern or prefix to denote integration IDs.\n- Stored in databases or configuration files linked to integration metadata.\n- Used as a key in API calls, database queries, and internal processing logic."},"_connectorId":{"type":"string","format":"objectId","description":"The unique 
identifier for the connector associated with the resource or operation. This ID is used to reference and manage the specific connector within the system, enabling integration and communication between different components or services.\n\n**Field behavior**\n- Uniquely identifies a connector instance.\n- Used to link resources or operations to a specific connector.\n- Typically immutable once assigned.\n- Required for operations that involve connector-specific actions.\n\n**Implementation guidance**\n- Ensure the connector ID is globally unique within the system.\n- Validate the format and existence of the connector ID before processing.\n- Use consistent naming or ID conventions to facilitate management.\n- Secure the connector ID to prevent unauthorized access or manipulation.\n\n**Examples**\n- \"conn-12345\"\n- \"connector_abcde\"\n- \"uuid-550e8400-e29b-41d4-a716-446655440000\"\n\n**Important notes**\n- The connector ID must correspond to a valid and active connector.\n- Changes to the connector ID may affect linked resources or workflows.\n- Connector IDs are often generated by the system and should not be manually altered.\n\n**Dependency chain**\n- Dependent on the existence of a connector registry or management system.\n- Used by components that require connector-specific configuration or data.\n- May be referenced by authentication, routing, or integration modules.\n\n**Technical details**\n- Typically represented as a string.\n- May follow UUID, GUID, or custom ID formats.\n- Stored and transmitted as part of API requests and responses.\n- Should be indexed in databases for efficient lookup."},"adaptorType":{"type":"string","description":"Specifies the underlying technology adapter that processes this import's operations.\n\nThis field determines:\n- Which connection types are compatible with this import\n- Which API endpoints and protocols will be used\n- Which import-specific configuration objects must be provided\n- The available features and capabilities of the import\n\nThe value must match an available adapter in the system and should correspond to the\nexternal system being accessed. For example:\n- \"HTTPImport\" for generic REST/SOAP APIs\n- \"SalesforceImport\" for Salesforce-specific operations\n- \"NetSuiteDistributedImport\" for NetSuite-specific operations\n- \"FTPImport\" for file transfers via FTP/SFTP\n\nWhen creating an import, this field must be set correctly and cannot be changed afterward\nwithout creating a new import resource.\n\n**Important notes**\n- When using a specific adapter type (e.g., \"SalesforceImport\"), you must also\nprovide the corresponding configuration object (e.g., \"salesforce\").\n- HTTPImport is the default adapter type for all REST/SOAP APIs and should be used unless there is a specialized adapter type for the specific API (such as SalesforceImport for Salesforce and NetSuiteDistributedImport for NetSuite).\n- WrapperImport is used for using a custom adapter built outside of Celigo, and is vary rarely used.\n- RDBMSImport is used for database imports that are built into Celigo such as SQL Server, MySQL, PostgreSQL, Snowflake, Oracle, MariaDB, etc. When using RDBMSImport, populate the \"rdbms\" configuration object.\n- JDBCImport is ONLY for generic JDBC connections that are not one of the built-in RDBMS types. Do NOT use JDBCImport for Snowflake, SQL Server, MySQL, PostgreSQL, or Oracle — use RDBMSImport instead. 
When using JDBCImport, populate the \"jdbc\" configuration object.\n- NetSuiteDistributedImport is the default for all NetSuite imports. Always use NetSuiteDistributedImport when importing into NetSuite.\n- NetSuiteImport is a legacy adaptor for NetSuite accounts without the Celigo SuiteApp 2.0. Do NOT use NetSuiteImport unless explicitly requested.\n\nThe connection type must be compatible with the adaptorType specified for this import.\nFor example, if adaptorType is \"HTTPImport\", _connectionId must reference a connection with type \"http\".\n","enum":["HTTPImport","FTPImport","AS2Import","S3Import","NetSuiteImport","NetSuiteDistributedImport","SalesforceImport","JDBCImport","RDBMSImport","MongodbImport","DynamodbImport","WrapperImport","AiAgentImport","GuardrailImport","FileSystemImport","ToolImport"]},"externalId":{"type":"string","description":"A unique identifier assigned to an entity by an external system or source, used to reference or correlate the entity across different platforms or services. This ID facilitates integration, synchronization, and data exchange between systems by providing a consistent reference point.\n\n**Field behavior**\n- Must be unique within the context of the external system.\n- Typically immutable once assigned to ensure consistent referencing.\n- Used to link or map records between internal and external systems.\n- May be optional or required depending on integration needs.\n\n**Implementation guidance**\n- Ensure the externalId format aligns with the external system’s specifications.\n- Validate uniqueness to prevent conflicts or data mismatches.\n- Store and transmit the externalId securely to maintain data integrity.\n- Consider indexing this field for efficient lookup and synchronization.\n\n**Examples**\n- \"12345-abcde-67890\"\n- \"EXT-2023-0001\"\n- \"user_987654321\"\n- \"SKU-XYZ-1001\"\n\n**Important notes**\n- The externalId is distinct from internal system identifiers.\n- Changes to the externalId can disrupt data synchronization.\n- Should be treated as a string to accommodate various formats.\n- May include alphanumeric characters, dashes, or underscores.\n\n**Dependency chain**\n- Often linked to integration or synchronization modules.\n- May depend on external system availability and consistency.\n- Used in API calls that involve cross-system data referencing.\n\n**Technical details**\n- Data type: string.\n- Maximum length may vary based on external system constraints.\n- Should support UTF-8 encoding to handle diverse character sets.\n- May require validation against a specific pattern or format."},"as2":{"$ref":"#/components/schemas/As2"},"dynamodb":{"$ref":"#/components/schemas/Dynamodb"},"http":{"$ref":"#/components/schemas/Http"},"ftp":{"$ref":"#/components/schemas/Ftp"},"jdbc":{"$ref":"#/components/schemas/Jdbc"},"mongodb":{"$ref":"#/components/schemas/Mongodb"},"netsuite":{"$ref":"#/components/schemas/NetSuite"},"netsuite_da":{"$ref":"#/components/schemas/NetsuiteDistributed"},"rdbms":{"$ref":"#/components/schemas/Rdbms"},"s3":{"$ref":"#/components/schemas/S3"},"wrapper":{"$ref":"#/components/schemas/Wrapper"},"salesforce":{"$ref":"#/components/schemas/Salesforce"},"file":{"$ref":"#/components/schemas/File"},"filesystem":{"$ref":"#/components/schemas/FileSystem"},"aiAgent":{"$ref":"#/components/schemas/AiAgent"},"guardrail":{"$ref":"#/components/schemas/Guardrail"},"name":{"type":"string","description":"The name property represents the identifier or title assigned to an entity, object, or resource within the system. 
It is typically a human-readable string that uniquely distinguishes the item from others in the same context. This property is essential for referencing, searching, and displaying the entity in user interfaces and API responses.\n\n**Field behavior**\n- Must be a non-empty string.\n- Should be unique within its scope to avoid ambiguity.\n- Often used as a primary label in UI components and logs.\n- May have length constraints depending on system requirements.\n- Typically case-sensitive or case-insensitive based on implementation.\n\n**Implementation guidance**\n- Validate the input to ensure it meets length and character requirements.\n- Enforce uniqueness if required by the application context.\n- Sanitize input to prevent injection attacks or invalid characters.\n- Support internationalization by allowing Unicode characters if applicable.\n- Provide clear error messages when validation fails.\n\n**Examples**\n- \"John Doe\"\n- \"Invoice_2024_001\"\n- \"MainServer\"\n- \"Project Phoenix\"\n- \"user@example.com\"\n\n**Important notes**\n- Avoid using reserved keywords or special characters that may interfere with system processing.\n- Consider trimming whitespace from the beginning and end of the string.\n- The name should be meaningful and descriptive to improve usability.\n- Changes to the name might affect references or links in other parts of the system.\n\n**Dependency chain**\n- May be referenced by other properties such as IDs, URLs, or display labels.\n- Could be linked to permissions or access control mechanisms.\n- Often used in search and filter operations within the API.\n\n**Technical details**\n- Data type: string.\n- Encoding: UTF-8 recommended.\n- Maximum length: typically defined by system constraints (e.g., 255 characters).\n- Validation rules: regex patterns may be applied to restrict allowed characters.\n- Storage considerations: indexed for fast lookup if used frequently in queries."},"description":{"type":"string","description":"A detailed textual explanation that describes the purpose, characteristics, or context of the associated entity or item. 
This field is intended to provide users with clear and comprehensive information to better understand the subject it represents.\n**Field behavior**\n- Accepts plain text or formatted text depending on implementation.\n- Should be concise yet informative, avoiding overly technical jargon unless necessary.\n- May support multi-line input to allow detailed explanations.\n- Typically optional but recommended for clarity.\n\n**Implementation guidance**\n- Ensure the description is user-friendly and accessible to the target audience.\n- Avoid including sensitive or confidential information.\n- Use consistent terminology aligned with the overall documentation or product language.\n- Consider supporting markdown or rich text formatting if applicable.\n\n**Examples**\n- \"This API endpoint retrieves user profile information including name, email, and preferences.\"\n- \"A unique identifier assigned to each transaction for tracking purposes.\"\n- \"Specifies the date and time when the event occurred, formatted in ISO 8601.\"\n\n**Important notes**\n- Keep descriptions up to date with any changes in functionality or behavior.\n- Avoid redundancy with other fields; focus on clarifying the entity’s role or usage.\n- Descriptions should not contain executable code or commands.\n\n**Dependency chain**\n- Often linked to the entity or property it describes.\n- May influence user understanding and correct usage of the associated field.\n- Can be referenced in user guides, API documentation, or UI tooltips.\n\n**Technical details**\n- Typically stored as a string data type.\n- May have length constraints depending on system limitations.\n- Encoding should support UTF-8 to accommodate international characters.\n- Should be sanitized to prevent injection attacks if rendered in UI contexts.","maxLength":10240},"unencrypted":{"type":"object","description":"Indicates whether the data or content is stored or transmitted without encryption, meaning it is in plain text and not protected by any cryptographic methods. 
This flag helps determine if additional security measures are necessary to safeguard sensitive information.\n\n**Field behavior**\n- Represents a boolean state where `true` means data is unencrypted and `false` means data is encrypted.\n- Used to identify if the data requires encryption before storage or transmission.\n- May influence processing logic related to data security and compliance.\n\n**Implementation guidance**\n- Ensure this field is explicitly set to reflect the actual encryption status of the data.\n- Use this flag to trigger encryption routines or security checks when data is unencrypted.\n- Validate the field to prevent false positives that could lead to security vulnerabilities.\n\n**Examples**\n- `true` indicating a password stored in plain text (not recommended).\n- `false` indicating a file that has been encrypted before upload.\n- `true` for a message sent over an unsecured channel.\n\n**Important notes**\n- Storing or transmitting data with `unencrypted` set to `true` can expose sensitive information to unauthorized access.\n- This field does not perform encryption itself; it only signals the encryption status.\n- Proper handling of unencrypted data is critical for compliance with data protection regulations.\n\n**Dependency chain**\n- Often used in conjunction with fields specifying encryption algorithms or keys.\n- May affect downstream processes such as logging, auditing, or alerting mechanisms.\n- Related to security policies and access control configurations.\n\n**Technical details**\n- Typically represented as a boolean value.\n- Should be checked before performing operations that assume data confidentiality.\n- May be integrated with security frameworks to enforce encryption standards."},"sampleData":{"type":"object","description":"An array or collection of sample data entries used to demonstrate or test the functionality of the API or application. This data serves as example input or output to help users understand the expected format, structure, and content. 
It can include various data types depending on the context, such as strings, numbers, objects, or nested arrays.\n\n**Field behavior**\n- Contains example data that mimics real-world usage scenarios.\n- Can be used for testing, validation, or demonstration purposes.\n- May be optional or required depending on the API specification.\n- Should be representative of typical data to ensure meaningful examples.\n\n**Implementation guidance**\n- Populate with realistic and relevant sample entries.\n- Ensure data types and structures align with the API schema.\n- Avoid sensitive or personally identifiable information.\n- Update regularly to reflect any changes in the API data model.\n\n**Examples**\n- A list of user profiles with names, emails, and roles.\n- Sample product entries with IDs, descriptions, and prices.\n- Example sensor readings with timestamps and values.\n- Mock transaction records with amounts and statuses.\n\n**Important notes**\n- Sample data is for illustrative purposes and may not reflect actual production data.\n- Should not be used for live processing or decision-making.\n- Ensure compliance with data privacy and security standards when creating samples.\n\n**Dependency chain**\n- Dependent on the data model definitions within the API.\n- May influence or be influenced by validation rules and schema constraints.\n- Often linked to documentation and testing frameworks.\n\n**Technical details**\n- Typically formatted as JSON arrays or objects.\n- Can include nested structures to represent complex data.\n- Size and complexity should be balanced to optimize performance and clarity.\n- May be embedded directly in API responses or provided as separate resources."},"distributed":{"type":"boolean","description":"Indicates whether the import is using a distributed adaptor, such as a NetSuite Distributed Import.\n**Field behavior**\n- Accepts boolean values: true or false.\n- When true, the import is using a distributed adaptor such as a NetSuite Distributed Import.\n- When false, the import is not using a distributed adaptor.\n\n**Examples**\n- true: When the adaptorType is NetSuiteDistributedImport\n- false: In all other cases"},"maxAttempts":{"type":"number","description":"Specifies the maximum number of attempts allowed for a particular operation or action before it is considered failed or terminated. This property helps control retry logic and prevents infinite loops or excessive retries in processes such as authentication, data submission, or task execution.  \n**Field behavior:**  \n- Defines the upper limit on retry attempts for an operation.  \n- Once the maximum attempts are reached, no further retries are made.  \n- Can be used to trigger fallback mechanisms or error handling after exceeding attempts.  \n**Implementation** GUIDANCE:  \n- Set to a positive integer value representing the allowed retry count.  \n- Should be configured based on the operation's criticality and expected failure rates.  \n- Consider exponential backoff or delay strategies in conjunction with maxAttempts.  \n- Validate input to ensure it is a non-negative integer.  \n**Examples:**  \n- 3 (allowing up to three retry attempts)  \n- 5 (permitting five attempts before failure)  \n- 0 (disabling retries entirely)  \n**Important notes:**  \n- Setting this value too high may cause unnecessary resource consumption or delays.  \n- Setting it too low may result in premature failure without sufficient retry opportunities.  \n- Should be used in combination with timeout settings for robust error handling.  
\n**Dependency chain:**  \n- Often used alongside properties like retryDelay, timeout, or backoffStrategy.  \n- May influence or be influenced by error handling or circuit breaker configurations.  \n**Technical details:**  \n- Typically represented as an integer data type.  \n- Enforced by the application logic or middleware managing retries.  \n- May be configurable at runtime or deployment time depending on system design."},"ignoreExisting":{"type":"boolean","description":"**CRITICAL FIELD:** Controls whether the import should skip records that already exist in the target system rather than attempting to update them or throwing errors.\n\nWhen set to true, the import will:\n- Check if each record already exists in the target system (using a lookup configured in the import, or if the record coming into the import has a value populated in the field configured in the ignoreExtract property)\n- Skip processing for records that are found to already exist\n- Continue processing only records that don't exist (new records)\n- This is for CREATE operations where you want to avoid duplicates\n\nWhen set to false or undefined (default):\n- All records are processed normally in the creation process (if applicable)\n- Existing records may be duplicated or may cause errors depending on the API\n- No special existence checking is performed\n\n**When to set this to true (MUST detect these PATTERNS)**\n\n**ALWAYS set ignoreExisting to true** when the user's prompt includes ANY of these patterns:\n\n**Primary Patterns (very common)**\n- \"ignore existing\" / \"ignoring existing\" / \"ignore any existing\"\n- \"skip existing\" / \"skipping existing\" / \"skip any existing\"\n- \"while ignoring existing\" / \"while skipping existing\"\n- \"skip duplicates\" / \"avoid duplicates\" / \"ignore duplicates\"\n\n**Secondary Patterns**\n- \"only create new\" / \"only add new\" / \"create new only\"\n- \"don't update existing\" / \"avoid updating existing\" / \"without updating\"\n- \"create if doesn't exist\" / \"add if doesn't exist\"\n- \"insert only\" / \"create only\"\n- \"skip if already exists\" / \"ignore if exists\"\n- \"don't process existing\" / \"bypass existing\"\n\n**Context Clues**\n- User says \"create\" or \"insert\" AND mentions checking/matching against existing data\n- User wants to add records but NOT modify what's already there\n- User is concerned about duplicate prevention during creation\n\n**Examples**\n\n**Should set ignoreExisting: true**\n- \"Create customer records in Shopify while ignoring existing customers\" → ignoreExisting: true\n- \"Create vendor records while ignoring existing vendors\" → ignoreExisting: true\n- \"Import new products only, skip existing ones\" → ignoreExisting: true\n- \"Add orders to the system, ignore any that already exist\" → ignoreExisting: true\n- \"Create accounts but don't update existing ones\" → ignoreExisting: true\n- \"Insert new contacts, skip duplicates\" → ignoreExisting: true\n- \"Create records, skipping any that match existing data\" → ignoreExisting: true\n\n**Should not set ignoreExisting: true**\n- \"Update existing customer records\" → ignoreExisting: false (update operation)\n- \"Create or update records\" → ignoreExisting: false (upsert operation)\n- \"Sync all records\" → ignoreExisting: false (sync/upsert operation)\n\n**Important**\n- This field is typically used with INSERT operations\n- Common pattern: \"Create X while ignoring existing X\" → MUST set to true\n\n**When to leave this false/undefined**\n\nLeave this false or undefined 
when:\n- The prompt doesn't mention skipping or ignoring existing records\n- The import is only performing updates (no inserts)\n- The prompt says \"sync\", \"update\", or \"upsert\"\n\n**Important notes**\n- This flag requires a properly configured lookup to identify existing records, or have an ignoreExtract field configured to identify the field that is used to determine if the record already exists.\n- The lookup typically checks a unique identifier field (id, email, external_id, etc.)\n- When true, existing records are silently skipped (not updated, not errored)\n- Use this for insert-only or create-only operations\n- Do NOT use this if the user wants to update existing records\n\n**Technical details**\n- Works in conjunction with lookup configurations to determine if a record exists or has an ignoreExtract field configured to determine if the record already exists.\n- The lookup queries the target system before attempting to create the record\n- If the lookup returns a match, the record is skipped\n- If the lookup returns no match, the record is processed normally"},"ignoreMissing":{"type":"boolean","description":"Indicates whether the system should ignore missing or non-existent resources during processing. When set to true, the operation will proceed without error even if some referenced items are not found; when false, the absence of required resources will cause the operation to fail or return an error.  \n**Field behavior:**  \n- Controls error handling related to missing resources.  \n- Allows operations to continue gracefully when some data is unavailable.  \n- Affects the robustness and fault tolerance of the process.  \n**Implementation guidance:**  \n- Use true to enable lenient processing where missing data is acceptable.  \n- Use false to enforce strict validation and fail fast on missing resources.  \n- Ensure that downstream logic can handle partial results if ignoring missing items.  \n**Examples:**  \n- ignoreMissing: true — skips over missing files during batch processing.  \n- ignoreMissing: false — throws an error if a referenced database entry is not found.  \n**Important notes:**  \n- Setting this to true may lead to incomplete results if missing data is critical.  \n- Default behavior should be clearly documented to avoid unexpected failures.  \n- Consider logging or reporting missing items even when ignoring them.  \n**Dependency** CHAIN:  \n- Often used in conjunction with resource identifiers or references.  \n- May impact validation and error reporting modules.  \n**Technical details:**  \n- Typically a boolean flag.  \n- Influences control flow and exception handling mechanisms.  \n- Should be clearly defined in API contracts to manage client expectations."},"idLockTemplate":{"type":"string","description":"The unique identifier for the lock template used to define the configuration and behavior of a lock within the system. 
This ID links the lock instance to a predefined template that specifies its properties, access rules, and operational parameters.\n\n**Field behavior**\n- Uniquely identifies a lock template within the system.\n- Used to associate a lock instance with its configuration template.\n- Typically immutable once assigned to ensure consistent behavior.\n\n**Implementation guidance**\n- Must be a valid identifier corresponding to an existing lock template.\n- Should be validated against the list of available templates before assignment.\n- Ensure uniqueness to prevent conflicts between different lock configurations.\n\n**Examples**\n- \"template-12345\"\n- \"lockTemplateA1B2C3\"\n- \"LT-987654321\"\n\n**Important notes**\n- Changing the template ID after lock creation may lead to inconsistent lock behavior.\n- The template defines critical lock parameters; ensure the correct template is referenced.\n- This field is essential for lock provisioning and management workflows.\n\n**Dependency chain**\n- Depends on the existence of predefined lock templates in the system.\n- Used by lock management and access control modules to apply configurations.\n\n**Technical details**\n- Typically represented as a string or UUID.\n- Should conform to the system’s identifier format standards.\n- Indexed for efficient lookup and retrieval in the database."},"dataURITemplate":{"type":"string","description":"A template string used to construct data URIs dynamically by embedding variable placeholders that are replaced with actual values at runtime. This template facilitates the generation of standardized data URIs for accessing or referencing resources within the system.\n\n**Field behavior**\n- Accepts a string containing placeholders for variables.\n- Placeholders are replaced with corresponding runtime values to form a complete data URI.\n- Enables dynamic and flexible URI generation based on context or input parameters.\n- Supports standard URI encoding where necessary to ensure valid URI formation.\n\n**Implementation guidance**\n- Define clear syntax for variable placeholders within the template (e.g., using curly braces or another delimiter).\n- Ensure that all required variables are provided at runtime to avoid incomplete URIs.\n- Validate the final constructed URI to confirm it adheres to URI standards.\n- Consider supporting optional variables with default values to increase template flexibility.\n\n**Examples**\n- \"data:text/plain;base64,{base64EncodedData}\"\n- \"data:{mimeType};charset={charset},{data}\"\n- \"data:image/{imageFormat};base64,{encodedImageData}\"\n\n**Important notes**\n- The template must produce a valid data URI after variable substitution.\n- Proper encoding of variable values is critical to prevent malformed URIs.\n- This property is essential for scenarios requiring inline resource embedding or data transmission via URIs.\n- Misconfiguration or missing variables can lead to invalid or unusable URIs.\n\n**Dependency chain**\n- Relies on the availability of runtime variables corresponding to placeholders.\n- May depend on encoding utilities to prepare variable values.\n- Often used in conjunction with data processing or resource generation components.\n\n**Technical details**\n- Typically a UTF-8 encoded string.\n- Supports standard URI schemes and encoding rules.\n- Variable placeholders should be clearly defined and consistently parsed.\n- May integrate with templating engines or string interpolation 
mechanisms."},"oneToMany":{"$ref":"#/components/schemas/OneToMany"},"pathToMany":{"$ref":"#/components/schemas/PathToMany"},"blobKeyPath":{"type":"string","description":"Specifies the file system path to the blob key, which uniquely identifies the binary large object (blob) within the storage system. This path is used to locate and access the blob data efficiently during processing or retrieval operations.\n**Field behavior**\n- Represents a string path pointing to the location of the blob key.\n- Used as a reference to access the associated blob data.\n- Typically immutable once set to ensure consistent access.\n- May be required for operations involving blob retrieval or manipulation.\n**Implementation guidance**\n- Ensure the path is valid and correctly formatted according to the storage system's conventions.\n- Validate the existence of the blob key at the specified path before processing.\n- Handle cases where the path may be null or empty gracefully.\n- Secure the path to prevent unauthorized access to blob data.\n**Examples**\n- \"/data/blobs/abc123def456\"\n- \"blobstore/keys/2024/06/blobkey789\"\n- \"s3://bucket-name/blob-keys/key123\"\n**Important notes**\n- The blobKeyPath must correspond to an existing blob key in the storage backend.\n- Incorrect or malformed paths can lead to failures in blob retrieval.\n- Access permissions to the path must be properly configured.\n- The format of the path may vary depending on the underlying storage technology.\n**Dependency chain**\n- Dependent on the storage system's directory or key structure.\n- Used by blob retrieval and processing components.\n- May be linked to metadata describing the blob.\n**Technical details**\n- Typically represented as a UTF-8 encoded string.\n- May include hierarchical directory structures or URI schemes.\n- Should be sanitized to prevent injection or path traversal vulnerabilities.\n- Often used as a lookup key in blob storage APIs or databases."},"blob":{"type":"boolean","description":"The binary large object (blob) representing the raw data content to be processed, stored, or transmitted. 
This field typically contains encoded or serialized data such as images, files, or other multimedia content in a compact binary format.\n\n**Field behavior**\n- Holds the actual binary data payload.\n- Can represent various data types including images, documents, or other file formats.\n- May be encoded in formats like Base64 for safe transmission over text-based protocols.\n- Treated as opaque data by the API, with no interpretation unless specified.\n\n**Implementation guidance**\n- Ensure the blob data is properly encoded (e.g., Base64) if the transport medium requires text-safe encoding.\n- Validate the size and format of the blob to meet API constraints.\n- Handle decoding and encoding consistently on both client and server sides.\n- Use streaming or chunking for very large blobs to optimize performance.\n\n**Examples**\n- A Base64-encoded JPEG image file.\n- A serialized JSON object converted into a binary format.\n- A PDF document encoded as a binary blob.\n- An audio file represented as a binary stream.\n\n**Important notes**\n- The blob content is typically opaque and should not be altered during transmission.\n- Size limits may apply depending on API or transport constraints.\n- Proper encoding and decoding are critical to avoid data corruption.\n- Security considerations should be taken into account when handling binary data.\n\n**Dependency chain**\n- May depend on encoding schemes (e.g., Base64) for safe transmission.\n- Often used in conjunction with metadata fields describing the blob type or format.\n- Requires appropriate content-type headers or descriptors for correct interpretation.\n\n**Technical details**\n- Usually represented as a byte array or Base64-encoded string in JSON APIs.\n- May require MIME type specification to indicate the nature of the data.\n- Handling may involve buffer management and memory considerations.\n- Supports binary-safe transport mechanisms to preserve data integrity."},"assistant":{"type":"string","description":"Specifies the configuration and behavior settings for the AI assistant that will interact with the user. 
This property defines how the assistant responds, including its personality, knowledge scope, response style, and any special instructions or constraints that guide its operation.\n\n**Field behavior**\n- Determines the assistant's tone, style, and manner of communication.\n- Controls the knowledge base or data sources the assistant can access.\n- Enables customization of the assistant's capabilities and limitations.\n- May include parameters for language, verbosity, and response format.\n\n**Implementation guidance**\n- Ensure the assistant configuration aligns with the intended user experience.\n- Validate that all required sub-properties within the assistant configuration are correctly set.\n- Support dynamic updates to the assistant settings to adapt to different contexts or user needs.\n- Provide defaults for unspecified settings to maintain consistent behavior.\n\n**Examples**\n- Setting the assistant to a formal tone with technical expertise.\n- Configuring the assistant to provide concise answers with references.\n- Defining the assistant to operate within a specific domain, such as healthcare or finance.\n- Enabling multi-language support for the assistant responses.\n\n**Important notes**\n- Changes to the assistant property can significantly affect user interaction quality.\n- Properly securing and validating assistant configurations is critical to prevent misuse.\n- The assistant's behavior should comply with ethical guidelines and privacy regulations.\n- Overly restrictive settings may limit the assistant's usefulness, while too broad settings may reduce relevance.\n\n**Dependency chain**\n- May depend on user preferences or session context.\n- Interacts with the underlying AI model and its capabilities.\n- Influences downstream processing of user inputs and outputs.\n- Can be linked to external knowledge bases or APIs for enhanced responses.\n\n**Technical details**\n- Typically structured as an object containing multiple nested properties.\n- May include fields such as personality traits, knowledge cutoff dates, and response constraints.\n- Supports serialization and deserialization for API communication.\n- Requires compatibility with the AI platform's configuration schema."},"deleteAfterImport":{"type":"boolean","description":"Indicates whether the source file should be deleted automatically after the import process completes successfully. 
This boolean flag helps manage storage by removing files that are no longer needed once their data has been ingested.\n\n**Field behavior**\n- When set to true, the system deletes the source file immediately after a successful import.\n- When set to false or omitted, the source file remains intact after import.\n- Deletion only occurs if the import process completes without errors.\n\n**Implementation guidance**\n- Validate that the import was successful before deleting the file.\n- Ensure proper permissions are in place to delete the file.\n- Consider logging deletion actions for audit purposes.\n- Provide clear user documentation about the implications of enabling this flag.\n\n**Examples**\n- true: The file \"data.csv\" will be removed after import.\n- false: The file \"data.csv\" will remain after import for manual review or backup.\n\n**Important notes**\n- Enabling this flag may result in data loss if the file is needed for re-import or troubleshooting.\n- Use with caution in environments where file retention policies are strict.\n- This flag does not affect files that fail to import.\n\n**Dependency chain**\n- Depends on the successful completion of the import operation.\n- May interact with file system permissions and cleanup routines.\n\n**Technical details**\n- Typically implemented as a boolean data type.\n- Triggers a file deletion API or system call post-import.\n- Should handle exceptions gracefully to avoid orphaned files or unintended deletions."},"assistantMetadata":{"type":"object","description":"Metadata containing additional information about the assistant, such as configuration details, versioning, capabilities, and any custom attributes that help in managing or identifying the assistant's behavior and context. This metadata supports enhanced control, monitoring, and customization of the assistant's interactions.\n\n**Field behavior**\n- Holds supplementary data that describes or configures the assistant.\n- Can include static or dynamic information relevant to the assistant's operation.\n- Used to influence or inform the assistant's responses and functionality.\n- May be updated or extended to reflect changes in the assistant's capabilities or environment.\n\n**Implementation guidance**\n- Structure metadata in a clear, consistent format (e.g., key-value pairs).\n- Ensure sensitive information is not exposed through metadata.\n- Use metadata to enable feature toggles, version tracking, or context awareness.\n- Validate metadata content to maintain integrity and compatibility.\n\n**Examples**\n- Version number of the assistant (e.g., \"v1.2.3\").\n- Supported languages or locales.\n- Enabled features or modules.\n- Custom tags indicating the assistant's domain or purpose.\n\n**Important notes**\n- Metadata should not contain user-specific or sensitive personal data.\n- Keep metadata concise to avoid performance overhead.\n- Changes in metadata may affect assistant behavior; manage updates carefully.\n- Metadata is primarily for internal use and may not be exposed to end users.\n\n**Dependency chain**\n- May depend on assistant configuration settings.\n- Can influence or be referenced by response generation logic.\n- Interacts with monitoring and analytics systems for tracking.\n\n**Technical details**\n- Typically represented as a JSON object or similar structured data.\n- Should be easily extensible to accommodate future attributes.\n- Must be compatible with the overall assistant API schema.\n- Should support serialization and deserialization without data 
loss."},"useTechAdaptorForm":{"type":"boolean","description":"Indicates whether the technical adaptor form should be utilized in the current process or workflow. This boolean flag determines if the system should enable and display the technical adaptor form interface for user interaction or automated processing.\n\n**Field behavior**\n- When set to true, the technical adaptor form is activated and presented to the user or system.\n- When set to false, the technical adaptor form is bypassed or hidden.\n- Influences the flow of data input and validation related to technical adaptor configurations.\n\n**Implementation guidance**\n- Default to false if the technical adaptor form is optional or not always required.\n- Ensure that enabling this flag triggers all necessary UI components and backend processes related to the technical adaptor form.\n- Validate that dependent fields or modules respond appropriately when this flag changes state.\n\n**Examples**\n- true: The system displays the technical adaptor form for configuration.\n- false: The system skips the technical adaptor form and proceeds without it.\n\n**Important notes**\n- Changing this flag may affect downstream processing and data integrity.\n- Must be synchronized with user permissions and feature availability.\n- Should be clearly documented in user guides if exposed in UI settings.\n\n**Dependency chain**\n- May depend on user roles or feature toggles enabling technical adaptor functionality.\n- Could influence or be influenced by other configuration flags related to adaptor workflows.\n\n**Technical details**\n- Typically implemented as a boolean data type.\n- Used in conditional logic to control UI rendering and backend processing paths.\n- May trigger event listeners or hooks when its value changes."},"distributedAdaptorData":{"type":"object","description":"Contains configuration and operational data specific to distributed adaptors used within the system. This data includes parameters, status information, and metadata necessary for managing and monitoring distributed adaptor instances across different nodes or environments. 
It facilitates synchronization, performance tuning, and error handling for adaptors operating in a distributed architecture.\n\n**Field behavior**\n- Holds detailed information about each distributed adaptor instance.\n- Supports dynamic updates to reflect real-time adaptor status and configuration changes.\n- Enables centralized management and monitoring of distributed adaptors.\n- May include nested structures representing various adaptor attributes and metrics.\n\n**Implementation guidance**\n- Ensure data consistency and integrity when updating distributedAdaptorData.\n- Use appropriate serialization formats to handle complex nested data.\n- Implement validation to verify the correctness of adaptor configuration parameters.\n- Design for scalability to accommodate varying numbers of distributed adaptors.\n\n**Examples**\n- Configuration settings for a distributed database adaptor including connection strings and timeout values.\n- Status reports indicating the health and performance metrics of adaptors deployed across multiple servers.\n- Metadata describing adaptor version, deployment environment, and last synchronization timestamp.\n\n**Important notes**\n- This property is critical for systems relying on distributed adaptors to function correctly.\n- Changes to this data should be carefully managed to avoid disrupting adaptor operations.\n- Sensitive information within adaptor data should be secured and access-controlled.\n\n**Dependency chain**\n- Dependent on the overall system configuration and deployment topology.\n- Interacts with monitoring and management modules that consume adaptor data.\n- May influence load balancing and failover mechanisms within the distributed system.\n\n**Technical details**\n- Typically represented as a complex object or map with key-value pairs.\n- May include arrays or lists to represent multiple adaptor instances.\n- Requires efficient parsing and serialization to minimize performance overhead.\n- Should support versioning to handle changes in adaptor data schema over time."},"filter":{"allOf":[{"description":"Filter configuration for selectively processing records during import operations.\n\n**Import-specific behavior**\n\n**Pre-Import Filtering**: Filters are applied to incoming records before they are sent to the destination system. Records that don't match the filter criteria are silently dropped and not imported.\n\n**Available filter fields**\n\nThe fields available for filtering are the data fields from each record being imported.\n"},{"$ref":"#/components/schemas/Filter"}]},"traceKeyTemplate":{"type":"string","description":"A template string used to generate unique trace keys for tracking and correlating requests across distributed systems. 
This template supports placeholders that can be dynamically replaced with runtime values such as timestamps, unique identifiers, or contextual metadata to create meaningful and consistent trace keys.\n\n**Field behavior**\n- Defines the format and structure of trace keys used in logging and monitoring.\n- Supports dynamic insertion of variables or placeholders to customize trace keys per request.\n- Ensures trace keys are unique and consistent for effective traceability.\n- Can be used to correlate logs, metrics, and traces across different services.\n\n**Implementation guidance**\n- Use clear and descriptive placeholders that map to relevant runtime data (e.g., {timestamp}, {requestId}).\n- Ensure the template produces trace keys that are unique enough to avoid collisions.\n- Validate the template syntax before applying it to avoid runtime errors.\n- Consider including service names or environment identifiers for multi-service deployments.\n- Keep the template concise to avoid excessively long trace keys.\n\n**Examples**\n- \"trace-{timestamp}-{requestId}\"\n- \"{serviceName}-{env}-{uniqueId}\"\n- \"traceKey-{userId}-{sessionId}-{epochMillis}\"\n\n**Important notes**\n- The template must be compatible with the tracing and logging infrastructure in use.\n- Placeholders should be well-defined and documented to ensure consistent usage.\n- Improper templates may lead to trace key collisions or loss of traceability.\n- Avoid sensitive information in trace keys to maintain security and privacy.\n\n**Dependency chain**\n- Relies on runtime context or metadata to replace placeholders.\n- Integrates with logging, monitoring, and tracing systems that consume trace keys.\n- May depend on unique identifier generators or timestamp providers.\n\n**Technical details**\n- Typically implemented as a string with placeholder syntax (e.g., curly braces).\n- Requires a parser or formatter to replace placeholders with actual values at runtime.\n- Should support extensibility to add new placeholders as needed.\n- May need to conform to character restrictions imposed by downstream systems."},"mockResponse":{"type":"object","description":"Defines a predefined response that the system will return when the associated API endpoint is invoked, allowing for simulation of real API behavior without requiring actual backend processing. 
This is particularly useful for testing, development, and demonstration purposes where consistent and controlled responses are needed.\n\n**Field behavior**\n- Returns the specified mock data instead of executing the real API logic.\n- Can simulate various response scenarios including success, error, and edge cases.\n- Overrides the default response when enabled.\n- Supports static or dynamic content depending on implementation.\n\n**Implementation guidance**\n- Ensure the mock response format matches the expected API response schema.\n- Use realistic data to closely mimic actual API behavior.\n- Update mock responses regularly to reflect changes in the real API.\n- Consider including headers, status codes, and body content in the mock response.\n- Provide clear documentation on when and how the mock response is used.\n\n**Examples**\n- Returning a fixed JSON object representing a user profile.\n- Simulating a 404 error response with an error message.\n- Providing a list of items with pagination metadata.\n- Mocking a successful transaction confirmation message.\n\n**Important notes**\n- Mock responses should not be used in production environments unless explicitly intended.\n- They are primarily for development, testing, and demonstration.\n- Overuse of mock responses can mask issues in the real API implementation.\n- Ensure that consumers of the API are aware when mock responses are active.\n\n**Dependency chain**\n- Dependent on the API endpoint configuration.\n- May interact with request parameters to determine the appropriate mock response.\n- Can be linked with feature flags or environment settings to toggle mock behavior.\n\n**Technical details**\n- Typically implemented as static JSON or XML payloads.\n- May include HTTP status codes and headers.\n- Can be integrated with API gateways or mocking tools.\n- Supports conditional logic in advanced implementations to vary responses."},"_ediProfileId":{"type":"string","format":"objectId","description":"The unique identifier for the Electronic Data Interchange (EDI) profile associated with the transaction or entity. 
This ID links the current data to a specific EDI configuration that defines the format, protocols, and trading partner details used for electronic communication.\n\n**Field behavior**\n- Uniquely identifies an EDI profile within the system.\n- Used to retrieve or reference EDI settings and parameters.\n- Must be consistent and valid to ensure proper EDI processing.\n- Typically immutable once assigned to maintain data integrity.\n\n**Implementation guidance**\n- Ensure the ID corresponds to an existing and active EDI profile in the system.\n- Validate the format and existence of the ID before processing transactions.\n- Use this ID to fetch EDI-specific rules, mappings, and partner information.\n- Secure the ID to prevent unauthorized changes or misuse.\n\n**Examples**\n- \"EDI12345\"\n- \"PROF-67890\"\n- \"X12_PROFILE_001\"\n\n**Important notes**\n- This ID is critical for routing and formatting EDI messages correctly.\n- Incorrect or missing IDs can lead to failed or misrouted EDI transmissions.\n- The ID should be managed centrally to avoid duplication or conflicts.\n\n**Dependency chain**\n- Depends on the existence of predefined EDI profiles in the system.\n- Used by EDI processing modules to apply correct communication standards.\n- May be linked to trading partner configurations and document types.\n\n**Technical details**\n- Typically a string or alphanumeric value.\n- Stored in a database with indexing for quick lookup.\n- May follow a naming convention defined by the organization or EDI standards.\n- Used as a foreign key in relational data models involving EDI transactions."},"parsers":{"type":["string","null"],"enum":["1"],"description":"A collection of parser configurations that define how input data should be interpreted and processed. Each parser specifies rules, patterns, or formats to extract meaningful information from raw data sources, enabling the system to handle diverse data types and structures effectively. 
This property allows customization and extension of parsing capabilities to accommodate various input formats.\n\n**Field behavior**\n- Accepts multiple parser definitions as an array or list.\n- Each parser operates independently to process specific data formats.\n- Parsers are applied in the order they are defined unless otherwise specified.\n- Supports enabling or disabling individual parsers dynamically.\n- Can include built-in or custom parser implementations.\n\n**Implementation guidance**\n- Ensure parsers are well-defined with clear matching criteria and extraction rules.\n- Validate parser configurations to prevent conflicts or overlaps.\n- Provide mechanisms to add, update, or remove parsers without downtime.\n- Support extensibility to integrate new parsing logic as needed.\n- Document each parser’s purpose and expected input/output formats.\n\n**Examples**\n- A JSON parser that extracts fields from JSON-formatted input.\n- A CSV parser that splits input lines into columns based on delimiters.\n- A regex-based parser that identifies patterns within unstructured text.\n- An XML parser that navigates hierarchical data structures.\n- A custom parser designed to interpret proprietary log file formats.\n\n**Important notes**\n- Incorrect parser configurations can lead to data misinterpretation or processing errors.\n- Parsers should be optimized for performance to handle large volumes of data efficiently.\n- Consider security implications when parsing untrusted input to avoid injection attacks.\n- Testing parsers with representative sample data is crucial for reliability.\n- Parsers may depend on external libraries or modules; ensure compatibility.\n\n**Dependency chain**\n- Relies on input data being available and accessible.\n- May depend on schema definitions or data contracts for accurate parsing.\n- Interacts with downstream components that consume parsed output.\n- Can be influenced by global settings such as character encoding or locale.\n- May require synchronization with data validation and transformation steps.\n\n**Technical details**\n- Typically implemented as modular components or plugins.\n- Configurations may include pattern definitions, field mappings, and error handling rules.\n- Supports various data formats including text, binary, and structured documents.\n- May expose APIs or interfaces for runtime configuration and monitoring.\n- Often integrated with logging and debugging tools to trace parsing operations."},"hooks":{"type":"object","properties":{"preMap":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the function to be executed during the pre-mapping hook phase. This function allows for custom logic to be applied before the main mapping process begins, enabling data transformation, validation, or any preparatory steps required. 
It should be defined in a way that it can be invoked with the appropriate context and input data.\n\n**Field behavior**\n- Executed before the mapping process starts.\n- Receives input data and context for processing.\n- Can modify or validate data prior to mapping.\n- Should return the processed data or relevant output for the next stage.\n\n**Implementation guidance**\n- Define the function with clear input parameters and expected output.\n- Ensure the function handles errors gracefully to avoid interrupting the mapping flow.\n- Keep the function focused on pre-mapping concerns only.\n- Test the function independently to verify its behavior before integration.\n\n**Examples**\n- A function that normalizes input data formats.\n- A function that filters out invalid entries.\n- A function that enriches data with additional attributes.\n- A function that logs input data for auditing purposes.\n\n**Important notes**\n- The function must be synchronous or properly handle asynchronous operations.\n- Avoid side effects that could impact other parts of the mapping process.\n- Ensure compatibility with the overall data pipeline and mapping framework.\n- Document the function’s purpose and usage clearly for maintainability.\n\n**Dependency chain**\n- Invoked after the preMap hook is triggered.\n- Outputs data consumed by the subsequent mapping logic.\n- May depend on external utilities or services for data processing.\n\n**Technical details**\n- Typically implemented as a JavaScript or TypeScript function.\n- Receives parameters such as input data object and context metadata.\n- Returns a transformed data object or a promise resolving to it.\n- Integrated into the mapping engine’s hook system for execution timing."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier for the script to be executed during the pre-mapping hook phase. This ID is used to reference and invoke the specific script that performs custom logic or transformations before the main mapping process begins.\n**Field behavior**\n- Specifies which script is triggered before the mapping operation.\n- Must correspond to a valid and existing script within the system.\n- Determines the custom pre-processing logic applied to data.\n**Implementation guidance**\n- Ensure the script ID is correctly registered and accessible in the environment.\n- Validate the script's compatibility with the preMap hook context.\n- Use consistent naming conventions for script IDs to avoid conflicts.\n**Examples**\n- \"script12345\"\n- \"preMapTransform_v2\"\n- \"hookScript_001\"\n**Important notes**\n- An invalid or missing script ID will result in the preMap hook being skipped or causing an error.\n- The script associated with this ID should be optimized for performance to avoid delays.\n- Permissions must be set appropriately to allow execution of the referenced script.\n**Dependency chain**\n- Depends on the existence of the script repository or script management system.\n- Linked to the preMap hook lifecycle event in the mapping process.\n- May interact with input data and influence subsequent mapping steps.\n**Technical details**\n- Typically represented as a string identifier.\n- Used internally by the system to locate and execute the script.\n- May be stored in a database or configuration file referencing available scripts."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the current operation or context. 
This ID is used to reference and manage the specific stack within the system, ensuring that all related resources and actions are correctly linked.  \n**Field behavior:**  \n- Uniquely identifies a stack instance within the environment.  \n- Used internally to track and manage stack-related operations.  \n- Typically immutable once assigned for a given stack lifecycle.  \n**Implementation guidance:**  \n- Should be generated or assigned by the system managing the stacks.  \n- Must be validated to ensure uniqueness within the scope of the application.  \n- Should be securely stored and transmitted to prevent unauthorized access or manipulation.  \n**Examples:**  \n- \"stack-12345abcde\"  \n- \"arn:aws:cloudformation:us-east-1:123456789012:stack/my-stack/50d6f0a0-1234-11e8-9a8b-0a1234567890\"  \n- \"projX-envY-stackZ\"  \n**Important notes:**  \n- This ID is critical for correlating resources and operations to the correct stack.  \n- Changing the stack ID after creation can lead to inconsistencies and errors.  \n- Should not be confused with other identifiers like resource IDs or deployment IDs.  \n**Dependency chain:**  \n- Depends on the stack creation or registration process to obtain the ID.  \n- Used by hooks, deployment scripts, and resource managers to reference the stack.  \n**Technical details:**  \n- Typically a string value, possibly following a specific format or pattern.  \n- May include alphanumeric characters, dashes, and colons depending on the system.  \n- Often used as a key in databases or APIs to retrieve stack information."},"configuration":{"type":"object","description":"Configuration settings that define the behavior and parameters for the preMap hook, allowing customization of its execution prior to the mapping process. This property enables users to specify options such as input validation rules, transformation parameters, or conditional logic that tailor the hook's operation to specific requirements.\n\n**Field behavior**\n- Accepts a structured object containing key-value pairs relevant to the preMap hook.\n- Influences how the preMap hook processes data before mapping occurs.\n- Can enable or disable specific features or validations within the hook.\n- Supports dynamic adjustment of hook behavior based on provided settings.\n\n**Implementation guidance**\n- Validate the configuration object against the expected schema to ensure correctness.\n- Provide default values for optional settings to maintain consistent behavior.\n- Document all configurable options clearly for users to understand their impact.\n- Allow extensibility to accommodate future configuration parameters without breaking compatibility.\n\n**Examples**\n- Setting a flag to enable strict input validation before mapping.\n- Specifying a list of fields to exclude from the mapping process.\n- Defining transformation rules such as trimming whitespace or converting case.\n- Providing conditional logic parameters to skip mapping under certain conditions.\n\n**Important notes**\n- Incorrect configuration values may cause the preMap hook to fail or behave unexpectedly.\n- Configuration should be tested thoroughly to ensure it aligns with the intended data processing flow.\n- Changes to configuration may affect downstream processes relying on the mapped data.\n- Sensitive information should not be included in the configuration to avoid security risks.\n\n**Dependency chain**\n- Depends on the preMap hook being enabled and invoked during the processing pipeline.\n- Influences the input data that will be passed 
to subsequent mapping stages.\n- May interact with other hook configurations or global settings affecting data transformation.\n\n**Technical details**\n- Typically represented as a JSON object or equivalent data structure.\n- Parsed and applied at runtime before the mapping logic executes.\n- Supports nested configuration properties for complex customization.\n- Should be immutable during the execution of the preMap hook to ensure consistency."}},"description":"A hook function that is executed before the mapping process begins. This function allows for custom preprocessing or transformation of data prior to the main mapping logic being applied. It is typically used to modify input data, validate conditions, or set up necessary context for the mapping operation.\n\n**Field behavior**\n- Invoked once before the mapping starts.\n- Receives the raw input data or context.\n- Can modify or replace the input data before mapping.\n- Can abort or alter the flow based on custom logic.\n\n**Implementation guidance**\n- Implement as a synchronous or asynchronous function depending on the environment.\n- Ensure it returns the modified data or context for the mapping to proceed.\n- Handle errors gracefully to avoid disrupting the mapping process.\n- Use for data normalization, enrichment, or validation before mapping.\n\n**Examples**\n- Sanitizing input strings to remove unwanted characters.\n- Adding default values to missing fields.\n- Logging or auditing input data before transformation.\n- Validating input schema and throwing errors if invalid.\n\n**Important notes**\n- This hook is optional; if not provided, mapping proceeds with original input.\n- Changes made here directly affect the mapping outcome.\n- Avoid heavy computations to prevent performance bottlenecks.\n- Should not perform side effects that depend on mapping results.\n\n**Dependency chain**\n- Executed before the main mapping function.\n- Influences the data passed to subsequent mapping hooks or steps.\n- May affect downstream validation or output generation.\n\n**Technical details**\n- Typically implemented as a function with signature (inputData, context) => modifiedData.\n- Supports both synchronous return values and Promises for async operations.\n- Integrated into the mapping pipeline as the first step.\n- Can be configured or replaced depending on mapping framework capabilities."},"postMap":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the name of the function to be executed during the post-map hook phase. This function is invoked after the mapping process completes, allowing for custom processing, validation, or transformation of the mapped data. 
It should be defined within the appropriate scope and adhere to the expected signature for post-map hook functions.\n\n**Field behavior**\n- Defines the callback function to run after the mapping operation.\n- Enables customization and extension of the mapping logic.\n- Executes only once per mapping cycle during the post-map phase.\n- Can modify or augment the mapped data before final output.\n\n**Implementation guidance**\n- Ensure the function name corresponds to a valid, accessible function in the execution context.\n- The function should handle any necessary error checking and data validation.\n- Avoid long-running or blocking operations within the function to maintain performance.\n- Document the function’s expected inputs and outputs clearly.\n\n**Examples**\n- \"validateMappedData\"\n- \"transformPostMapping\"\n- \"logMappingResults\"\n\n**Important notes**\n- The function must be synchronous or properly handle asynchronous behavior if supported.\n- If the function is not defined or invalid, the post-map hook will be skipped without error.\n- This hook is optional but recommended for complex mapping scenarios requiring additional processing.\n\n**Dependency chain**\n- Depends on the mapping process completing successfully.\n- May rely on data structures produced by the mapping phase.\n- Should be compatible with other hooks or lifecycle events in the mapping pipeline.\n\n**Technical details**\n- Expected to be a string representing the function name.\n- The function should accept parameters as defined by the hook interface.\n- Execution context must have access to the function at runtime.\n- Errors thrown within the function may affect the overall mapping operation depending on error handling policies."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier for the script to be executed in the postMap hook. This ID is used to reference and invoke the specific script after the mapping process completes, enabling custom logic or transformations to be applied.  \n**Field behavior:**  \n- Accepts a string representing the script's unique ID.  \n- Must correspond to an existing script within the system.  \n- Triggers the execution of the associated script after the mapping operation finishes.  \n**Implementation guidance:**  \n- Ensure the script ID is valid and the script is properly deployed before referencing it here.  \n- Use this field to extend or customize the behavior of the mapping process via scripting.  \n- Validate the script ID to prevent runtime errors during hook execution.  \n**Examples:**  \n- \"script_12345\"  \n- \"postMapTransformScript\"  \n- \"customHookScript_v2\"  \n**Important notes:**  \n- The script referenced must be compatible with the postMap hook context.  \n- Incorrect or missing script IDs will result in the hook not executing as intended.  \n- This field is optional if no post-mapping script execution is required.  \n**Dependency chain:**  \n- Depends on the existence of the script repository or script management system.  \n- Relies on the postMap hook lifecycle event to trigger execution.  \n**Technical details:**  \n- Typically a string data type.  \n- Used internally to fetch and execute the script logic.  \n- May require permissions or access rights to execute the referenced script."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the post-map hook. 
This ID is used to reference and manage the specific stack instance within the system, enabling precise tracking and manipulation of resources or configurations tied to that stack during post-processing operations.\n\n**Field behavior**\n- Uniquely identifies a stack within the system.\n- Used during post-map hook execution to associate actions with the correct stack.\n- Immutable once set to ensure consistent reference throughout the lifecycle.\n- Required for operations that involve stack-specific context or resource management.\n\n**Implementation guidance**\n- Must be a valid and existing stack identifier within the system.\n- Should be validated for format and existence before use.\n- Ensure secure handling to prevent unauthorized access or manipulation.\n- Typically generated by the system and passed to hooks automatically.\n\n**Examples**\n- \"stack-12345\"\n- \"prod-stack-67890\"\n- \"dev-environment-stack-001\"\n\n**Important notes**\n- This ID is critical for linking hook operations to the correct stack context.\n- Incorrect or missing _stackId can lead to failed or misapplied post-map operations.\n- Should not be altered manually once assigned.\n\n**Dependency chain**\n- Depends on the stack creation or registration process that generates the ID.\n- Used by post-map hooks to perform stack-specific logic.\n- May influence downstream processes that rely on stack context.\n\n**Technical details**\n- Typically a string value.\n- Format may vary depending on the system's stack naming conventions.\n- Stored and transmitted securely to maintain integrity.\n- Used as a key in database queries or API calls related to stack management."},"configuration":{"type":"object","description":"Configuration settings that define the behavior and parameters for the postMap hook. 
This object allows customization of the hook's execution by specifying various options and values that control its operation.\n\n**Field behavior**\n- Accepts a structured object containing key-value pairs relevant to the postMap hook.\n- Determines how the postMap hook processes data after mapping.\n- Can include flags, thresholds, or other parameters influencing hook logic.\n- Optional or required depending on the specific implementation of the postMap hook.\n\n**Implementation guidance**\n- Validate the configuration object against the expected schema before use.\n- Ensure all required fields within the configuration are present and correctly typed.\n- Use default values for any missing optional parameters to maintain consistent behavior.\n- Document all configurable options clearly for users to understand their impact.\n\n**Examples**\n- `{ \"enableLogging\": true, \"maxRetries\": 3 }`\n- `{ \"timeout\": 5000, \"retryOnFailure\": false }`\n- `{ \"mappingStrategy\": \"strict\", \"caseSensitive\": true }`\n\n**Important notes**\n- Incorrect configuration values may cause the postMap hook to fail or behave unexpectedly.\n- Configuration should be tailored to the specific needs of the postMap operation.\n- Changes to configuration may require restarting or reinitializing the hook process.\n\n**Dependency chain**\n- Depends on the postMap hook being invoked.\n- May influence downstream processes that rely on the output of the postMap hook.\n- Interacts with other hook configurations if multiple hooks are chained.\n\n**Technical details**\n- Typically represented as a JSON object or equivalent data structure.\n- Parsed and validated at runtime before the hook executes.\n- Supports nested objects and arrays if the hook logic requires complex configurations."}},"description":"A hook function that is executed after the mapping process is completed. This function allows for custom logic to be applied to the mapped data, enabling modifications, validations, or side effects based on the results of the mapping operation. 
It receives the mapped data as input and can return a modified version or perform asynchronous operations.\n\n**Field behavior**\n- Invoked immediately after the mapping process finishes.\n- Receives the output of the mapping as its input parameter.\n- Can modify, augment, or validate the mapped data.\n- Supports synchronous or asynchronous execution.\n- The returned value from this hook replaces or updates the mapped data.\n\n**Implementation guidance**\n- Implement as a function that accepts the mapped data object.\n- Ensure proper handling of asynchronous operations if needed.\n- Use this hook to enforce business rules or data transformations post-mapping.\n- Avoid side effects that could interfere with subsequent processing unless intentional.\n- Validate the integrity of the data before returning it.\n\n**Examples**\n- Adjusting date formats or normalizing strings after mapping.\n- Adding computed properties based on mapped fields.\n- Logging or auditing mapped data for monitoring purposes.\n- Filtering out unwanted fields or entries from the mapped result.\n- Triggering notifications or events based on mapped data content.\n\n**Important notes**\n- This hook is optional and only invoked if defined.\n- Errors thrown within this hook may affect the overall mapping operation.\n- The hook should be performant to avoid slowing down the mapping pipeline.\n- Returned data must conform to expected schema to prevent downstream errors.\n\n**Dependency chain**\n- Depends on the completion of the primary mapping function.\n- May influence subsequent processing steps that consume the mapped data.\n- Should be coordinated with pre-mapping hooks to maintain data consistency.\n\n**Technical details**\n- Typically implemented as a callback or promise-returning function.\n- Receives one argument: the mapped data object.\n- Returns either the modified data object or a promise resolving to it.\n- Integrated into the mapping lifecycle after the main mapping logic completes."},"postSubmit":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the name of the function to be executed after the submission process completes. 
This function is typically used to perform post-submission tasks such as data processing, notifications, or cleanup operations.\n\n**Field behavior**\n- Invoked automatically after the main submission event finishes.\n- Can trigger additional workflows or side effects based on submission results.\n- Supports asynchronous execution depending on the implementation.\n\n**Implementation guidance**\n- Ensure the function name corresponds to a valid, accessible function within the execution context.\n- Validate that the function handles errors gracefully to avoid disrupting the overall submission flow.\n- Document the expected input parameters and output behavior of the function for maintainability.\n\n**Examples**\n- \"sendConfirmationEmail\"\n- \"logSubmissionData\"\n- \"updateUserStatus\"\n\n**Important notes**\n- The function must be idempotent if the submission process can be retried.\n- Avoid long-running operations within this function to prevent blocking the submission pipeline.\n- Security considerations should be taken into account, especially if the function interacts with external systems.\n\n**Dependency chain**\n- Depends on the successful completion of the submission event.\n- May rely on data generated or modified during the submission process.\n- Could trigger downstream hooks or events based on its execution.\n\n**Technical details**\n- Typically referenced by name as a string.\n- Execution context must have access to the function definition.\n- May support both synchronous and asynchronous invocation patterns depending on the platform."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier of the script to be executed after the submission process completes. This ID links the post-submit hook to a specific script that performs additional operations or custom logic once the main submission workflow has finished.\n\n**Field behavior**\n- Specifies which script is triggered automatically after the submission event.\n- Must correspond to a valid and existing script within the system.\n- Controls post-processing actions such as notifications, data transformations, or logging.\n\n**Implementation guidance**\n- Ensure the script ID is correctly referenced and the script is deployed before assigning this property.\n- Validate the script’s permissions and runtime environment compatibility.\n- Use consistent naming or ID conventions to avoid conflicts or errors.\n\n**Examples**\n- \"script_12345\"\n- \"postSubmitCleanupScript\"\n- \"notifyUserScript\"\n\n**Important notes**\n- If the script ID is invalid or missing, the post-submit hook will not execute.\n- Changes to the script ID require redeployment or configuration updates.\n- The script should be idempotent and handle errors gracefully to avoid disrupting the submission flow.\n\n**Dependency chain**\n- Depends on the existence of the script resource identified by this ID.\n- Linked to the post-submit hook configuration within the submission workflow.\n- May interact with other hooks or system components triggered after submission.\n\n**Technical details**\n- Typically represented as a string identifier.\n- Must be unique within the scope of available scripts.\n- Used by the system to locate and invoke the corresponding script at runtime."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the post-submit hook. 
This ID is used to reference and manage the specific stack instance within the system, enabling operations such as updates, deletions, or status checks after a submission event.\n\n**Field behavior**\n- Uniquely identifies a stack within the system.\n- Used to link post-submit actions to the correct stack.\n- Typically immutable once assigned to ensure consistent referencing.\n- Required for executing stack-specific post-submit logic.\n\n**Implementation guidance**\n- Ensure the ID is generated following the system’s unique identification standards.\n- Validate the presence and format of the stack ID before processing post-submit hooks.\n- Use this ID to fetch or manipulate stack-related data during post-submit operations.\n- Handle cases where the stack ID may not exist or is invalid gracefully.\n\n**Examples**\n- \"stack-12345\"\n- \"arn:aws:cloudformation:us-east-1:123456789012:stack/my-stack/abcde12345\"\n- \"proj-stack-v2\"\n\n**Important notes**\n- This ID must correspond to an existing stack within the environment.\n- Incorrect or missing stack IDs can cause post-submit hooks to fail.\n- The stack ID is critical for audit trails and debugging post-submit processes.\n\n**Dependency chain**\n- Depends on the stack creation or registration process to generate the ID.\n- Used by post-submit hook handlers to identify the target stack.\n- May be referenced by logging, monitoring, or notification systems post-submission.\n\n**Technical details**\n- Typically a string value.\n- May follow a specific format such as UUID, ARN, or custom naming conventions.\n- Stored and transmitted securely to prevent unauthorized access or manipulation.\n- Should be indexed in databases for efficient retrieval during post-submit operations."},"configuration":{"type":"object","description":"Configuration settings for the post-submit hook that define its behavior and parameters. 
This object contains key-value pairs specifying how the hook should operate after a submission event, including any necessary options, flags, or environment variables.\n\n**Field behavior**\n- Determines the specific actions and parameters used by the post-submit hook.\n- Can include nested settings tailored to the hook’s functionality.\n- Modifies the execution flow or output based on provided configuration values.\n\n**Implementation guidance**\n- Validate the configuration object against the expected schema for the post-submit hook.\n- Support extensibility to allow additional parameters without breaking existing functionality.\n- Ensure sensitive information within the configuration is handled securely.\n- Provide clear error messages if required configuration fields are missing or invalid.\n\n**Examples**\n- Setting a timeout duration for the post-submit process.\n- Specifying environment variables needed during hook execution.\n- Enabling or disabling certain features of the post-submit hook via flags.\n\n**Important notes**\n- The configuration must align with the capabilities of the post-submit hook implementation.\n- Incorrect or incomplete configuration may cause the hook to fail or behave unexpectedly.\n- Configuration changes typically require validation and testing before deployment.\n\n**Dependency chain**\n- Depends on the post-submit hook being triggered successfully.\n- May interact with other hooks or system components based on configuration.\n- Relies on the underlying system to interpret and apply configuration settings correctly.\n\n**Technical details**\n- Typically represented as a JSON object or equivalent data structure.\n- Supports nested objects and arrays for complex configurations.\n- May include string, numeric, boolean, or null values depending on parameters.\n- Parsed and applied at runtime during the post-submit hook execution phase."}},"description":"A callback function or hook that is executed immediately after a form submission process completes. 
This hook allows for custom logic to be run post-submission, such as handling responses, triggering notifications, updating UI elements, or performing cleanup tasks.\n\n**Field behavior**\n- Invoked only after the form submission has finished, regardless of success or failure.\n- Receives submission result data or error information as arguments.\n- Can be asynchronous to support operations like API calls or state updates.\n- Does not affect the submission process itself but handles post-submission side effects.\n\n**Implementation guidance**\n- Ensure the function handles both success and error scenarios gracefully.\n- Avoid long-running synchronous operations to prevent blocking the UI.\n- Use this hook to trigger any follow-up actions such as analytics tracking or user feedback.\n- Validate that the hook is defined before invocation to prevent runtime errors.\n\n**Examples**\n- Logging submission results to the console.\n- Displaying a success message or error notification to the user.\n- Redirecting the user to a different page after submission.\n- Resetting form fields or updating application state based on submission outcome.\n\n**Important notes**\n- This hook is optional; if not provided, no post-submission actions will be performed.\n- It should not be used to modify the submission data itself; that should be handled before submission.\n- Proper error handling within this hook is crucial to avoid unhandled exceptions.\n\n**Dependency chain**\n- Depends on the form submission process completing.\n- May interact with state management or UI components updated after submission.\n- Can be linked with pre-submit hooks for comprehensive form lifecycle management.\n\n**Technical details**\n- Typically implemented as a function accepting parameters such as submission response and error.\n- Can return a promise to support asynchronous operations.\n- Should be registered in the form configuration under the hooks.postSubmit property.\n- Execution context may vary depending on the form library or framework used."},"postAggregate":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the function to be executed during the post-aggregation hook phase. This function is invoked after the aggregation process completes, allowing for custom processing, transformation, or validation of the aggregated data before it is returned or further processed. 
The function should be designed to handle the aggregated data structure and can modify, enrich, or analyze the results as needed.\n\n**Field behavior**\n- Executed after the aggregation operation finishes.\n- Receives the aggregated data as input.\n- Can modify or augment the aggregated results.\n- Supports custom logic tailored to specific post-processing requirements.\n\n**Implementation guidance**\n- Ensure the function signature matches the expected input and output formats.\n- Handle potential errors within the function to avoid disrupting the overall aggregation flow.\n- Keep the function efficient to minimize latency in the post-aggregation phase.\n- Validate the output of the function to maintain data integrity.\n\n**Examples**\n- A function that filters aggregated results based on certain criteria.\n- A function that calculates additional metrics from the aggregated data.\n- A function that formats or restructures the aggregated output for downstream consumption.\n\n**Important notes**\n- The function must be deterministic and side-effect free to ensure consistent results.\n- Avoid long-running operations within the function to prevent performance bottlenecks.\n- The function should not alter the original data source, only the aggregated output.\n\n**Dependency chain**\n- Depends on the completion of the aggregation process.\n- May rely on the schema or structure of the aggregated data.\n- Can be linked to subsequent hooks or processing steps that consume the modified data.\n\n**Technical details**\n- Typically implemented as a callback or lambda function.\n- May be written in the same language as the aggregation engine or as a supported scripting language.\n- Should conform to the API’s expected interface for hook functions.\n- Execution context may include metadata about the aggregation operation."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier of the script to be executed after the aggregation process completes. This ID links the post-aggregation hook to a specific script that contains the logic or operations to be performed. 
It ensures that the correct script is triggered in the post-aggregation phase, enabling customized processing or data manipulation.\n\n**Field behavior**\n- Accepts a string representing the script's unique ID.\n- Used exclusively in the post-aggregation hook context.\n- Triggers the execution of the associated script after aggregation.\n- Must correspond to an existing and accessible script within the system.\n\n**Implementation guidance**\n- Validate that the script ID exists and is active before assignment.\n- Ensure the script linked by this ID is compatible with post-aggregation data.\n- Handle errors gracefully if the script ID is invalid or the script execution fails.\n- Maintain security by restricting script IDs to authorized scripts only.\n\n**Examples**\n- \"script12345\"\n- \"postAggScript_v2\"\n- \"cleanup_after_aggregation\"\n\n**Important notes**\n- The script ID must be unique within the scope of available scripts.\n- Changing the script ID will alter the behavior of the post-aggregation hook.\n- The script referenced should be optimized for performance to avoid delays.\n- Ensure proper permissions are set for the script to execute in this context.\n\n**Dependency chain**\n- Depends on the existence of a script repository or registry.\n- Relies on the aggregation process completing successfully before execution.\n- Interacts with the postAggregate hook mechanism to trigger execution.\n\n**Technical details**\n- Typically represented as a string data type.\n- May be stored in a database or configuration file referencing script metadata.\n- Used by the system's execution engine to locate and run the script.\n- Supports integration with scripting languages or environments supported by the platform."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the post-aggregation hook. 
This ID is used to reference and manage the specific stack context within which the hook operates, ensuring that the correct stack resources and configurations are applied during the post-aggregation process.\n\n**Field behavior**\n- Uniquely identifies the stack instance related to the post-aggregation hook.\n- Used to link the hook execution context to the appropriate stack environment.\n- Immutable once set to maintain consistency throughout the hook lifecycle.\n\n**Implementation guidance**\n- Must be a valid and existing stack identifier within the system.\n- Should be assigned automatically by the system or provided explicitly during hook configuration.\n- Ensure proper validation to prevent referencing non-existent or unauthorized stacks.\n\n**Examples**\n- \"stack-12345\"\n- \"prod-stack-67890\"\n- \"dev-environment-stack-001\"\n\n**Important notes**\n- This ID is critical for resolving stack-specific resources and permissions.\n- Incorrect or missing _stackId values can lead to hook execution failures.\n- Should be handled securely to prevent unauthorized access to stack data.\n\n**Dependency chain**\n- Depends on the stack management system to generate and maintain stack IDs.\n- Utilized by the post-aggregation hook execution engine to retrieve stack context.\n- May influence downstream processes that rely on stack-specific configurations.\n\n**Technical details**\n- Typically represented as a string.\n- Format and length may vary depending on the stack management system.\n- Should be indexed or cached for efficient lookup during hook execution."},"configuration":{"type":"object","description":"Configuration settings for the post-aggregate hook that define its behavior and parameters during execution. This property allows customization of the hook's operation by specifying key-value pairs or structured data relevant to the hook's logic.\n\n**Field behavior**\n- Accepts structured data or key-value pairs that tailor the hook's functionality.\n- Influences how the post-aggregate hook processes data after aggregation.\n- Can be optional or required depending on the specific hook implementation.\n- Supports dynamic adjustment of hook behavior without code changes.\n\n**Implementation guidance**\n- Validate configuration inputs to ensure they conform to expected formats and types.\n- Document all configurable options clearly for users to understand their impact.\n- Allow for extensibility to accommodate future configuration parameters.\n- Ensure secure handling of configuration data to prevent injection or misuse.\n\n**Examples**\n- {\"threshold\": 10, \"mode\": \"strict\"}\n- {\"enableLogging\": true, \"retryCount\": 3}\n- {\"filters\": [\"status:active\", \"region:us-east\"]}\n\n**Important notes**\n- Incorrect configuration values may cause the hook to malfunction or produce unexpected results.\n- Configuration should be versioned or tracked to maintain consistency across deployments.\n- Sensitive information should not be included in configuration unless properly encrypted or secured.\n\n**Dependency chain**\n- Depends on the specific post-aggregate hook implementation to interpret configuration.\n- May interact with other hook properties such as conditions or triggers.\n- Configuration changes can affect downstream processing or output of the aggregation.\n\n**Technical details**\n- Typically represented as a JSON object or map structure.\n- Parsed and applied at runtime when the post-aggregate hook is invoked.\n- Supports nested structures for complex configuration scenarios.\n- 
Should be compatible with the overall API schema and validation rules."}},"description":"A hook function that is executed after the aggregate operation has completed. This hook allows for custom logic to be applied to the aggregated results before they are returned or further processed. It is typically used to modify, filter, or augment the aggregated data based on specific application requirements.\n\n**Field behavior**\n- Invoked immediately after the aggregate query execution.\n- Receives the aggregated results as input.\n- Can modify or replace the aggregated data.\n- Supports asynchronous operations if needed.\n- Does not affect the execution of the aggregate operation itself.\n\n**Implementation guidance**\n- Ensure the hook handles errors gracefully to avoid disrupting the response.\n- Use this hook to implement custom transformations or validations on aggregated data.\n- Avoid heavy computations to maintain performance.\n- Return the modified data or the original data if no changes are needed.\n- Consider security implications when modifying aggregated results.\n\n**Examples**\n- Filtering out sensitive fields from aggregated results.\n- Adding computed properties based on aggregation output.\n- Logging or auditing aggregated data for monitoring purposes.\n- Transforming aggregated data into a different format before sending to clients.\n\n**Important notes**\n- This hook is specific to aggregate operations and will not trigger on other query types.\n- Modifications in this hook do not affect the underlying database.\n- The hook should return the final data to be used downstream.\n- Properly handle asynchronous code to ensure the hook completes before response.\n\n**Dependency chain**\n- Triggered after the aggregate query execution phase.\n- Can influence the data passed to subsequent hooks or response handlers.\n- Dependent on the successful completion of the aggregate operation.\n\n**Technical details**\n- Typically implemented as a function accepting the aggregated results and context.\n- Supports both synchronous and asynchronous function signatures.\n- Receives parameters such as the aggregated data, query context, and hook metadata.\n- Expected to return the processed aggregated data or a promise resolving to it."}},"description":"Defines a collection of hook functions or callbacks that are triggered at specific points during the execution lifecycle of a process or operation. 
These hooks allow customization and extension of default behavior by executing user-defined logic before, after, or during certain events.\n\n**Field behavior**\n- Supports multiple hook functions, each associated with a particular event or lifecycle stage.\n- Hooks are invoked automatically by the system when their corresponding event occurs.\n- Can be used to modify input data, handle side effects, perform validations, or trigger additional processes.\n- Execution order of hooks may be sequential or parallel depending on implementation.\n\n**Implementation guidance**\n- Ensure hooks are registered with clear event names or identifiers.\n- Validate hook functions for correct signature and expected parameters.\n- Provide error handling within hooks to avoid disrupting the main process.\n- Document available hook points and expected behavior for each.\n- Allow hooks to be asynchronous if the environment supports it.\n\n**Examples**\n- A \"beforeSave\" hook that validates data before saving to a database.\n- An \"afterFetch\" hook that formats or enriches data after retrieval.\n- A \"beforeDelete\" hook that checks permissions before allowing deletion.\n- A \"onError\" hook that logs errors or sends notifications.\n\n**Important notes**\n- Hooks should not introduce significant latency or side effects that impact core functionality.\n- Avoid circular dependencies or infinite loops caused by hooks triggering each other.\n- Security considerations must be taken into account when executing user-defined hooks.\n- Hooks may have access to sensitive data; ensure proper access controls.\n\n**Dependency chain**\n- Hooks depend on the lifecycle events or triggers defined by the system.\n- Hook execution may depend on the successful completion of prior steps.\n- Hook results can influence subsequent processing stages or final outcomes.\n\n**Technical details**\n- Typically implemented as functions, methods, or callbacks registered in a map or list.\n- May support synchronous or asynchronous execution models.\n- Can accept parameters such as context objects, event data, or state information.\n- Return values from hooks may be used to modify behavior or data flow.\n- Often integrated with event emitter or observer patterns."},"sampleResponseData":{"type":"object","description":"Contains example data representing a typical response returned by the API endpoint. This sample response data helps developers understand the structure, format, and types of values they can expect when interacting with the API. 
It serves as a reference for testing, debugging, and documentation purposes.\n\n**Field behavior**\n- Represents a static example of the API's response payload.\n- Should reflect the most common or typical response scenario.\n- May include nested objects, arrays, and various data types to illustrate the response structure.\n- Does not affect the actual API behavior or response generation.\n\n**Implementation guidance**\n- Ensure the sample data is accurate and up-to-date with the current API response schema.\n- Include all required fields and typical optional fields to provide a comprehensive example.\n- Use realistic values that clearly demonstrate the data format and constraints.\n- Update the sample response whenever the API response structure changes.\n\n**Examples**\n- A JSON object showing user details returned from a user info endpoint.\n- An array of product items with fields like id, name, price, and availability.\n- A nested object illustrating a complex response with embedded metadata and links.\n\n**Important notes**\n- This data is for illustrative purposes only and should not be used as actual input or output.\n- It helps consumers of the API to quickly grasp the expected response format.\n- Should be consistent with the API specification and schema definitions.\n\n**Dependency chain**\n- Depends on the API endpoint's response schema and data model.\n- Should align with any validation rules or constraints defined for the response.\n- May be linked to example requests or other documentation elements for completeness.\n\n**Technical details**\n- Typically formatted as a JSON or XML snippet matching the API's response content type.\n- May include placeholder values or sample identifiers.\n- Should be parsable and valid according to the API's response schema."},"responseTransform":{"allOf":[{"description":"Data transformation configuration for reshaping API response data during import operations.\n\n**Import-specific behavior**\n\n**Response Transformation**: Transforms the response data returned by the destination system after records have been imported. This allows you to reshape, filter, or enrich the response before it is processed by downstream flow steps or returned to the client.\n\n**Common Use Cases**:\n- Extracting relevant fields from verbose API responses\n- Normalizing response formats across different destination systems\n- Adding computed fields based on response data\n- Filtering out sensitive information before logging or further processing\n"},{"$ref":"#/components/schemas/Transform"}]},"modelMetadata":{"type":"object","description":"Contains detailed metadata about the model, including its version, architecture, training data characteristics, and any relevant configuration parameters that define its behavior and capabilities. 
This information helps in understanding the model's provenance, performance expectations, and compatibility with various tasks or environments.\n**Field behavior**\n- Captures comprehensive details about the model's identity and configuration.\n- May include version numbers, architecture type, training dataset descriptions, and hyperparameters.\n- Used for tracking model lineage and ensuring reproducibility.\n- Supports validation and compatibility checks before deployment or usage.\n**Implementation guidance**\n- Ensure metadata is accurate and up-to-date with the current model iteration.\n- Include standardized fields for easy parsing and comparison across models.\n- Use consistent formatting and naming conventions for metadata attributes.\n- Allow extensibility to accommodate future metadata elements without breaking compatibility.\n**Examples**\n- Model version: \"v1.2.3\"\n- Architecture: \"Transformer-based, 12 layers, 768 hidden units\"\n- Training data: \"Dataset XYZ, 1 million samples, balanced classes\"\n- Hyperparameters: \"learning_rate=0.001, batch_size=32\"\n**Important notes**\n- Metadata should be immutable once the model is finalized to ensure traceability.\n- Sensitive information such as proprietary training data details should be handled carefully.\n- Metadata completeness directly impacts model governance and audit processes.\n- Incomplete or inaccurate metadata can lead to misuse or misinterpretation of the model.\n**Dependency chain**\n- Relies on accurate model training and version control systems.\n- Interacts with deployment pipelines that validate model compatibility.\n- Supports monitoring systems that track model performance over time.\n**Technical details**\n- Typically represented as a structured object or JSON document.\n- May include nested fields for complex metadata attributes.\n- Should be easily serializable and deserializable for storage and transmission.\n- Can be linked to external documentation or repositories for extended information."},"mapping":{"type":"object","properties":{"fields":{"type":"string","enum":["string","number","boolean","numberarray","stringarray","json"],"description":"Defines the collection of individual field mappings within the overall mapping configuration. Each entry specifies the characteristics, data types, and indexing options for a particular field in the dataset or document structure. 
This property enables precise control over how each field is interpreted, stored, and queried by the system.\n\n**Field behavior**\n- Contains a set of key-value pairs where each key is a field name and the value is its mapping definition.\n- Determines how data in each field is processed, indexed, and searched.\n- Supports nested fields and complex data structures.\n- Can include settings such as data type, analyzers, norms, and indexing options.\n\n**Implementation guidance**\n- Define all relevant fields explicitly to optimize search and storage behavior.\n- Use consistent naming conventions for field names.\n- Specify appropriate data types to ensure correct parsing and querying.\n- Include nested mappings for objects or arrays as needed.\n- Validate field definitions to prevent conflicts or errors.\n\n**Examples**\n- Mapping a text field with a custom analyzer.\n- Defining a date field with a specific format.\n- Specifying a keyword field for exact match searches.\n- Creating nested object fields with their own sub-fields.\n\n**Important notes**\n- Omitting fields may lead to default dynamic mapping behavior, which might not be optimal.\n- Incorrect field definitions can cause indexing errors or unexpected query results.\n- Changes to field mappings often require reindexing of existing data.\n- Field names should avoid reserved characters or keywords.\n\n**Dependency chain**\n- Depends on the overall mapping configuration context.\n- Influences indexing and search components downstream.\n- Interacts with analyzers, tokenizers, and query parsers.\n\n**Technical details**\n- Typically represented as a JSON or YAML object with field names as keys.\n- Each field mapping includes properties like \"type\", \"index\", \"analyzer\", \"fields\", etc.\n- Supports complex types such as objects, nested, geo_point, and geo_shape.\n- May include metadata fields for internal use."},"lists":{"type":"string","enum":["string","number","boolean","numberarray","stringarray"],"description":"A collection of lists that define specific groupings or categories used within the mapping context. Each list contains a set of related items or values that are referenced to organize, filter, or map data effectively. 
These lists facilitate structured data handling and improve the clarity and maintainability of the mapping configuration.\n\n**Field behavior**\n- Contains multiple named lists, each representing a distinct category or grouping.\n- Lists can be referenced elsewhere in the mapping to apply consistent logic or transformations.\n- Supports dynamic or static content depending on the mapping requirements.\n- Enables modular and reusable data definitions within the mapping.\n\n**Implementation guidance**\n- Ensure each list has a unique identifier or name for clear referencing.\n- Populate lists with relevant and validated items to avoid mapping errors.\n- Use lists to centralize repeated values or categories to simplify updates.\n- Consider the size and complexity of lists to maintain performance and readability.\n\n**Examples**\n- A list of country codes used for regional mapping.\n- A list of product categories for classification purposes.\n- A list of status codes to standardize state representation.\n- A list of user roles for access control mapping.\n\n**Important notes**\n- Lists should be kept up-to-date to reflect current data requirements.\n- Avoid duplication of items across different lists unless intentional.\n- The structure and format of list items must align with the overall mapping schema.\n- Changes to lists may impact dependent mapping logic; test thoroughly after updates.\n\n**Dependency chain**\n- Lists may depend on external data sources or configuration files.\n- Other mapping properties or rules may reference these lists for validation or transformation.\n- Updates to lists can cascade to affect downstream processing or output.\n\n**Technical details**\n- Typically represented as arrays or collections within the mapping schema.\n- Items within lists can be simple values (strings, numbers) or complex objects.\n- Supports nesting or hierarchical structures if the schema allows.\n- May include metadata or annotations to describe list purpose or usage."}},"description":"Defines the association between input data fields and their corresponding target fields or processing rules within the system. This mapping specifies how data should be transformed, routed, or interpreted during processing to ensure accurate and consistent handling. 
It serves as a blueprint for data translation and integration tasks, enabling seamless interoperability between different data formats or components.\n\n**Field behavior**\n- Maps source fields to target fields or processing instructions.\n- Determines how input data is transformed or routed.\n- Can include nested or hierarchical mappings for complex data structures.\n- Supports conditional or dynamic mapping rules based on input values.\n\n**Implementation guidance**\n- Ensure mappings are clearly defined and validated to prevent data loss or misinterpretation.\n- Use consistent naming conventions for source and target fields.\n- Support extensibility to accommodate new fields or transformation rules.\n- Provide mechanisms for error handling when mappings fail or are incomplete.\n\n**Examples**\n- Mapping a JSON input field \"user_name\" to a database column \"username\".\n- Defining a transformation rule that converts date formats from \"MM/DD/YYYY\" to \"YYYY-MM-DD\".\n- Routing data from an input field \"status\" to different processing modules based on its value.\n\n**Important notes**\n- Incorrect or incomplete mappings can lead to data corruption or processing errors.\n- Mapping definitions should be version-controlled to track changes over time.\n- Consider performance implications when applying complex or large-scale mappings.\n\n**Dependency chain**\n- Dependent on the structure and schema of input data.\n- Influences downstream data processing, validation, and storage components.\n- May interact with transformation, validation, and routing modules.\n\n**Technical details**\n- Typically represented as key-value pairs, dictionaries, or mapping objects.\n- May support expressions or functions for dynamic value computation.\n- Can be serialized in formats such as JSON, YAML, or XML for configuration.\n- Should include metadata for data types, optionality, and default values where applicable."},"mappings":{"allOf":[{"description":"Field mapping configuration for transforming incoming records during import operations.\n\n**Import-specific behavior**\n\n**Data Transformation**: Mappings define how fields from incoming records are transformed and mapped to the destination system's field structure. This enables data normalization, field renaming, value transformation, and complex nested object construction.\n\n**Common Use Cases**:\n- Mapping source field names to destination field names\n- Transforming data types and formats\n- Building nested object structures required by the destination\n- Applying formulas and lookups to derive field values\n"},{"$ref":"#/components/schemas/Mappings"}]},"lookups":{"type":"string","description":"A collection of predefined reference data or mappings used to standardize and validate input values within the system. 
Lookups typically consist of key-value pairs or enumerations that facilitate consistent data interpretation and reduce errors by providing a controlled vocabulary or set of options.\n\n**Field behavior**\n- Serves as a centralized source for reference data used across various components.\n- Enables validation and normalization of input values against predefined sets.\n- Supports dynamic retrieval of lookup values to accommodate updates without code changes.\n- May include hierarchical or categorized data structures for complex reference data.\n\n**Implementation guidance**\n- Ensure lookups are comprehensive and cover all necessary reference data for the application domain.\n- Design lookups to be easily extendable and maintainable, allowing additions or modifications without impacting existing functionality.\n- Implement caching mechanisms if lookups are frequently accessed to optimize performance.\n- Provide clear documentation for each lookup entry to aid developers and users in understanding their purpose.\n\n**Examples**\n- Country codes and names (e.g., \"US\" -> \"United States\").\n- Status codes for order processing (e.g., \"PENDING\", \"SHIPPED\", \"DELIVERED\").\n- Currency codes and symbols (e.g., \"USD\" -> \"$\").\n- Product categories or types used in an inventory system.\n\n**Important notes**\n- Lookups should be kept up-to-date to reflect changes in business rules or external standards.\n- Avoid hardcoding lookup values in multiple places; centralize them to maintain consistency.\n- Consider localization requirements if lookup values need to be presented in multiple languages.\n- Validate input data against lookups to prevent invalid or unsupported values from entering the system.\n\n**Dependency chain**\n- Often used by input validation modules, UI dropdowns, reporting tools, and business logic components.\n- May depend on external data sources or configuration files for initialization.\n- Can influence downstream processing and data storage by enforcing standardized values.\n\n**Technical details**\n- Typically implemented as dictionaries, maps, or database tables.\n- May support versioning to track changes over time.\n- Can be exposed via APIs for dynamic retrieval by client applications.\n- Should include mechanisms for synchronization if distributed across multiple systems."},"settingsForm":{"$ref":"#/components/schemas/Form"},"preSave":{"$ref":"#/components/schemas/PreSave"},"settings":{"$ref":"#/components/schemas/Settings"}}},"As2":{"type":"object","description":"Configuration for As2 imports","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"The unique identifier for the Trading Partner (TP) Connector utilized within the AS2 communication framework. This identifier serves as a critical reference that links the AS2 configuration to the specific connector responsible for the secure transmission and reception of AS2 messages. It ensures that all message exchanges are accurately routed through the designated connector, thereby maintaining the integrity, security, and reliability of the communication process. This ID is fundamental for establishing, managing, and terminating AS2 sessions, enabling proper encryption, digital signing, and delivery confirmation of messages. 
It acts as a key element in the orchestration of AS2 workflows, ensuring seamless interoperability between trading partners.\n\n**Field behavior**\n- Uniquely identifies a TP Connector within the trading partner ecosystem.\n- Associates AS2 message exchanges with the appropriate connector configuration.\n- Essential for initiating, maintaining, and terminating AS2 communication sessions.\n- Immutable once set to preserve consistent routing and session management.\n- Validated against existing connectors to prevent misconfiguration.\n\n**Implementation guidance**\n- Must correspond to a valid, active connector ID registered in the trading partner management system.\n- Should be assigned during initial AS2 setup and remain unchanged unless a deliberate reconfiguration is required.\n- Implement validation checks to confirm the ID’s existence and compatibility with AS2 protocols before assignment.\n- Ensure synchronization between the connector ID and its configuration to avoid communication failures.\n- Handle updates cautiously, with appropriate notifications and fallback mechanisms to maintain session continuity.\n\n**Examples**\n- \"connector-12345\"\n- \"tp-connector-67890\"\n- \"as2-connector-001\"\n\n**Important notes**\n- Incorrect or invalid IDs can lead to failed message transmissions or misrouted communications.\n- The referenced connector must be fully configured to support AS2 standards, including security and protocol settings.\n- Changes to this identifier should be managed through controlled processes to prevent disruption of active AS2 sessions.\n- This ID is integral to the AS2 message lifecycle, impacting message encryption, signing, and delivery confirmation.\n\n**Dependency chain**\n- Relies on the existence and proper configuration of a TP Connector entity within the trading partner management system.\n- Interacts closely with AS2 configuration parameters that govern message handling, security credentials, and session management.\n- Dependent on system components responsible for connector registration, validation, and lifecycle management.\n\n**Technical details**\n- Represented as a string conforming to the identifier format defined by the trading partner management system.\n- Used internally by the AS2"},"fileNameTemplate":{"type":"string","description":"fileNameTemplate specifies the template string used to dynamically generate file names for AS2 message payloads during both transmission and receipt processes. This template allows the inclusion of customizable placeholders that are replaced at runtime with relevant contextual values such as timestamps, unique message identifiers, sender and receiver IDs, or other metadata. By leveraging these placeholders, the system ensures that generated file names are unique, descriptive, and meaningful, facilitating efficient file management, traceability, and downstream processing. 
The template is crucial for systematically organizing files, preventing naming conflicts, supporting audit trails, and enabling seamless integration with various file storage and transfer mechanisms.\n\n**Field behavior**\n- Defines the naming pattern and structure for AS2 message payload files.\n- Supports dynamic substitution of placeholders with runtime metadata values.\n- Ensures uniqueness and clarity in generated file names to prevent collisions.\n- Directly influences file storage organization, retrieval efficiency, and processing workflows.\n- Affects audit trails and logging by enforcing consistent and meaningful file naming conventions.\n- Applies uniformly to both outgoing and incoming AS2 message payloads to maintain consistency.\n- Allows flexibility to adapt file naming to organizational standards and compliance requirements.\n\n**Implementation guidance**\n- Use clear, descriptive placeholders such as {timestamp}, {messageId}, {senderId}, {receiverId}, and {date} to capture relevant metadata.\n- Validate templates to exclude invalid or unsupported characters based on the target file system’s constraints.\n- Incorporate date and time components to improve uniqueness and enable chronological sorting of files.\n- Explicitly include file extensions (e.g., .xml, .edi, .txt) to reflect the payload format and ensure compatibility with downstream systems.\n- Provide sensible default templates to maintain system functionality when custom templates are not specified.\n- Implement robust parsing and substitution logic to reliably handle all supported placeholders and edge cases.\n- Sanitize substituted values to prevent injection of unsafe, invalid, or reserved characters that could cause errors.\n- Consider locale and timezone settings when generating date/time placeholders to ensure consistency across systems.\n- Support escape sequences or delimiter customization to handle special characters within templates if necessary.\n\n**Examples**\n- \"AS2_{senderId}_{timestamp}.xml\"\n- \"{messageId}_{receiverId}_{date}.edi\"\n- \"payload_{timestamp}.txt\"\n- \"msg_{date}_{senderId}_{receiverId}_{messageId}.xml\"\n- \"AS2_{timestamp}_{messageId}.payload\"\n- \"inbound_{receiverId}_{date}_{messageId}.xml\"\n-"},"messageIdTemplate":{"type":"string","description":"messageIdTemplate is a customizable template string designed to generate the Message-ID header for AS2 messages, enabling the creation of unique, meaningful, and standards-compliant identifiers for each message. This template supports dynamic placeholders that are replaced with actual runtime values—such as timestamps, UUIDs, sequence numbers, sender-specific information, or other contextual data—ensuring that every Message-ID is globally unique and strictly adheres to RFC 5322 email header formatting standards. 
By defining the precise format of the Message-ID, this property plays a critical role in message tracking, logging, troubleshooting, auditing, and interoperability within AS2 communication workflows, facilitating reliable message correlation and lifecycle management.\n\n**Field behavior**\n- Specifies the exact format and structure of the Message-ID header in AS2 protocol messages.\n- Supports dynamic placeholders that are substituted with real-time values during message creation.\n- Guarantees global uniqueness of each Message-ID to prevent duplication and enable accurate message identification.\n- Directly influences message correlation, tracking, diagnostics, and auditing processes across AS2 systems.\n- Ensures compliance with email header syntax and AS2 protocol requirements to maintain interoperability.\n- Affects downstream processing by systems that rely on Message-ID for message lifecycle management.\n\n**Implementation guidance**\n- Use a clear and consistent placeholder syntax (e.g., `${placeholderName}`) for dynamic content insertion.\n- Incorporate unique elements such as UUIDs, timestamps, sequence numbers, sender domains, or other identifiers to guarantee global uniqueness.\n- Validate the generated Message-ID string against AS2 protocol specifications and RFC 5322 email header standards.\n- Ensure the final Message-ID is properly formatted, enclosed in angle brackets (`< >`), and free from invalid or disallowed characters.\n- Avoid embedding sensitive or confidential information within the Message-ID to mitigate potential security risks.\n- Thoroughly test the template to confirm compatibility with receiving AS2 systems and message processing components.\n- Consider the impact of the template on downstream systems that rely on Message-ID for message correlation and tracking.\n- Maintain consistency in template usage across environments to support reliable message auditing and troubleshooting.\n\n**Examples**\n- `<${uuid}@example.com>` — generates a Message-ID combining a UUID with a domain name.\n- `<${timestamp}.${sequence}@as2sender.com>` — includes a timestamp and sequence number for ordered uniqueness.\n- `<msg-${messageNumber}@${senderDomain}>` — uses message number and sender domain placeholders for traceability.\n- `<${date}-${uuid}@company"},"maxRetries":{"type":"number","description":"Maximum number of retry attempts for the AS2 message transmission in case of transient failures such as network errors, timeouts, or temporary unavailability of the recipient system. This setting controls how many times the system will automatically attempt to resend a message after an initial failure before marking the transmission as failed. 
It applies exclusively to retry attempts following the first transmission and does not influence the initial send operation.\n\n**Field behavior**\n- Specifies the total count of resend attempts after the initial transmission failure.\n- Retries are triggered only for recoverable, transient errors where subsequent attempts have a reasonable chance of success.\n- Retry attempts cease once the configured maximum number is reached, resulting in a final failure status.\n- Does not affect the initial message send; governs only the retry logic after the first failure.\n- Retry attempts are typically spaced out using backoff strategies to prevent overwhelming the network or recipient system.\n- Each retry is initiated only after the previous attempt has conclusively failed.\n- The retry mechanism respects configured backoff intervals and timing constraints to optimize network usage and avoid congestion.\n\n**Implementation guidance**\n- Select a balanced default value to avoid excessive retries that could cause delays or strain system resources.\n- Implement exponential backoff or incremental delay strategies between retries to reduce network congestion and improve success rates.\n- Ensure retry attempts comply with overall operation timeouts and do not exceed system or protocol limits.\n- Validate input to accept only non-negative integers; setting zero disables retries entirely.\n- Integrate retry logic with error detection, logging, and alerting mechanisms for comprehensive failure handling and monitoring.\n- Consider the impact of retries on message ordering and idempotency to prevent duplicate processing or inconsistent states.\n- Provide clear feedback and detailed logging for each retry attempt to facilitate troubleshooting and operational transparency.\n\n**Examples**\n- maxRetries: 3 — The system will attempt to resend the message up to three times after the initial failure before giving up.\n- maxRetries: 0 — No retries will be performed; the failure is reported immediately after the first unsuccessful attempt.\n- maxRetries: 5 — Allows up to five retry attempts, increasing the chance of successful delivery in unstable network conditions.\n- maxRetries: 1 — A single retry attempt is made, suitable for environments with low tolerance for delays.\n\n**Important notes**\n- Excessive retry attempts can increase latency and consume additional system and network resources.\n- Retry behavior should align with AS2 protocol standards and best practices to maintain compliance and interoperability.\n- This setting improves"},"headers":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the MIME type of the AS2 message content, indicating the format of the data being transmitted. 
This helps the receiving system understand how to process and interpret the message payload.\n\n**Field behavior**\n- Defines the content type of the AS2 message payload.\n- Used by the receiver to determine how to parse and handle the message.\n- Typically corresponds to standard MIME types such as \"application/edi-x12\", \"application/edi-consent\", or \"application/xml\".\n- May influence processing rules, such as encryption, compression, or validation steps.\n\n**Implementation guidance**\n- Ensure the value is a valid MIME type string.\n- Use standard MIME types relevant to AS2 transmissions.\n- Avoid custom or non-standard MIME types unless both sender and receiver explicitly support them.\n- Validate the type value against expected content formats in the AS2 communication agreement.\n\n**Examples**\n- \"application/edi-x12\"\n- \"application/edi-consent\"\n- \"application/xml\"\n- \"text/plain\"\n\n**Important notes**\n- The type field is critical for interoperability between AS2 trading partners.\n- Incorrect or missing type values can lead to message processing errors or rejection.\n- This field is part of the AS2 message headers and should conform to AS2 protocol specifications.\n\n**Dependency chain**\n- Depends on the content of the AS2 message payload.\n- Influences downstream processing components that handle message parsing and validation.\n- May be linked with other headers such as content-transfer-encoding or content-disposition.\n\n**Technical details**\n- Represented as a string in MIME type format.\n- Included in the AS2 message headers under the \"Content-Type\" header.\n- Should comply with RFC 2045 and RFC 2046 MIME standards."},"value":{"type":"string","description":"The value of the AS2 header, representing the content associated with the specified header name.\n\n**Field behavior**\n- Holds the actual content or data for the AS2 header identified by the corresponding header name.\n- Can be a string or any valid data type supported by the AS2 protocol for header values.\n- Used during the construction or parsing of AS2 messages to specify header details.\n\n**Implementation guidance**\n- Ensure the value conforms to the expected format and encoding for AS2 headers.\n- Validate the value against any protocol-specific constraints or length limits.\n- When setting this value, consider the impact on message integrity and compliance with AS2 standards.\n- Support dynamic assignment to accommodate varying header requirements.\n\n**Examples**\n- \"application/edi-x12\"\n- \"12345\"\n- \"attachment; filename=\\\"invoice.xml\\\"\"\n- \"UTF-8\"\n\n**Important notes**\n- The value must be compatible with the corresponding header name to maintain protocol correctness.\n- Incorrect or malformed values can lead to message rejection or processing errors.\n- Some headers may require specific formatting or encoding (e.g., MIME types, character sets).\n\n**Dependency chain**\n- Dependent on the `as2.headers.name` property which specifies the header name.\n- Influences the overall AS2 message headers structure and processing logic.\n\n**Technical details**\n- Typically represented as a string in the AS2 message headers.\n- May require encoding or escaping depending on the header content.\n- Should comply with RFC 4130 and related AS2 specifications for header formatting."},"_id":{"type":"object","description":"Unique identifier for the AS2 message header.\n\n**Field behavior**\n- Serves as a unique key to identify the AS2 message header within the system.\n- Typically assigned 
automatically by the system or provided by the client to ensure message traceability.\n- Used to correlate messages and track message processing status.\n\n**Implementation guidance**\n- Should be a string value that uniquely identifies the header.\n- Must be immutable once assigned to prevent inconsistencies.\n- Should follow a consistent format (e.g., UUID, GUID) to avoid collisions.\n- Ensure uniqueness across all AS2 message headers in the system.\n\n**Examples**\n- \"123e4567-e89b-12d3-a456-426614174000\"\n- \"AS2Header_001\"\n- \"msg-header-20240427-0001\"\n\n**Important notes**\n- This field is critical for message tracking and auditing.\n- Avoid using easily guessable or sequential values to enhance security.\n- If not provided, the system should generate a unique identifier automatically.\n\n**Dependency chain**\n- May be referenced by other components or logs that track message processing.\n- Related to message metadata and processing status fields.\n\n**Technical details**\n- Data type: string\n- Format: typically UUID or a unique string identifier\n- Immutable after creation\n- Indexed for efficient lookup in databases or message stores"},"default":{"type":["object","null"],"description":"A boolean property that specifies whether the current header should be used as the default header in the AS2 message configuration. When set to true, this header will be applied automatically unless overridden by a more specific header setting.\n\n**Field behavior**\n- Determines if the header is the default choice for AS2 message headers.\n- If true, this header is applied automatically in the absence of other specific header configurations.\n- If false or omitted, the header is not considered the default and must be explicitly specified to be used.\n\n**Implementation guidance**\n- Use this property to designate a fallback or standard header configuration for AS2 messages.\n- Ensure only one header is marked as default to avoid conflicts.\n- Validate the boolean value to accept only true or false.\n- Consider the impact on message routing and processing when setting this property.\n\n**Examples**\n- `default: true` — This header is the default and will be used unless another header is specified.\n- `default: false` — This header is not the default and must be explicitly selected.\n\n**Important notes**\n- Setting multiple headers as default may lead to unpredictable behavior.\n- The default header is applied only when no other header overrides are present.\n- This property is optional; if omitted, the header is not treated as default.\n\n**Dependency chain**\n- Related to other header properties within `as2.headers`.\n- Influences message header selection logic in AS2 message processing.\n- May interact with routing or profile configurations that specify header usage.\n\n**Technical details**\n- Data type: boolean\n- Default value: false (if omitted)\n- Located under the `as2.headers` object in the configuration schema\n- Used during AS2 message construction to determine header application"}},"description":"A collection of HTTP headers included in the AS2 message transmission, serving as essential metadata and control information that governs the accurate handling, routing, authentication, and processing of the AS2 message between trading partners. These headers must strictly adhere to standard HTTP header formats and encoding rules, encompassing mandatory fields such as Content-Type, Message-ID, AS2-From, AS2-To, and other protocol-specific headers critical for AS2 communication. 
Proper configuration and validation of these headers are vital to maintaining message integrity, security (including encryption and digital signatures), and ensuring successful interoperability within the AS2 ecosystem. This collection supports both standard and custom headers as required by specific trading partner agreements or business needs, with header names treated as case-insensitive per HTTP standards.  \n**Field behavior:**  \n- Represents key-value pairs of HTTP headers transmitted alongside the AS2 message.  \n- Directly influences message routing, identification, authentication, security, and processing by the receiving system.  \n- Supports inclusion of both mandatory AS2 headers and additional custom headers as dictated by trading partner agreements or specific business requirements.  \n- Header names are case-insensitive, consistent with HTTP/1.1 specifications.  \n- Facilitates communication of protocol-specific instructions, security parameters (e.g., MIC, Disposition-Notification-To), and message metadata essential for AS2 transactions.  \n- Enables conveying information necessary for message encryption, signing, and receipt acknowledgments.  \n**Implementation guidance:**  \n- Ensure all header names and values conform strictly to HTTP/1.1 header syntax, character encoding, and escaping rules.  \n- Include all mandatory AS2 headers such as AS2-From, AS2-To, Message-ID, Content-Type, and security-related headers like MIC and Disposition-Notification-To.  \n- Avoid duplicate headers unless explicitly allowed by the AS2 specification or trading partner agreements.  \n- Validate header values rigorously for correctness, format compliance, and security to prevent injection attacks or protocol violations.  \n- Consider the impact of headers on message security features including encryption, digital signatures, and receipt acknowledgments.  \n- Maintain consistency with trading partner agreements regarding required, optional, and custom headers to ensure seamless interoperability.  \n- Use appropriate character encoding and escape sequences to handle special or non-ASCII characters within header values.  \n**Examples:**  \n- Content-Type: application/edi-x12  \n- AS2-From: PartnerA  \n- AS2-To: PartnerB  \n- Message-ID: <"}}},"Dynamodb":{"type":"object","description":"Configuration for DynamoDB imports","properties":{"region":{"type":"string","description":"Specifies the AWS region where the DynamoDB service is hosted and where all database operations will be executed. This property defines the precise geographical location of the DynamoDB instance, which is essential for optimizing application latency, ensuring compliance with data residency and sovereignty regulations, and aligning with organizational infrastructure strategies. Selecting the appropriate region helps minimize network latency by placing data physically closer to end-users or application servers, thereby improving performance and responsiveness. Additionally, the chosen region influences service availability, pricing structures, feature sets, and legal compliance requirements, making it a critical configuration parameter. The region value must be a valid AWS region identifier (such as \"us-east-1\" or \"eu-west-1\") and is used to construct the correct service endpoint URL for all DynamoDB interactions. 
Proper configuration of this property is vital for seamless integration with AWS SDKs, authentication mechanisms, and IAM policies that may be region-specific, ensuring secure and efficient access to DynamoDB resources.\n\n**Field behavior**\n- Determines the AWS data center location for all DynamoDB operations.\n- Directly impacts network latency, data residency, and compliance adherence.\n- Must be specified as a valid and supported AWS region code.\n- Influences the endpoint URL, authentication endpoints, and service routing.\n- Affects service availability, pricing, feature availability, and latency.\n- Remains consistent throughout the lifecycle of the application unless explicitly changed.\n\n**Implementation guidance**\n- Validate the region against the current list of supported AWS regions to prevent misconfiguration and runtime errors.\n- Use the region value to dynamically construct the DynamoDB service endpoint URL and configure SDK clients accordingly.\n- Allow the region to be configurable and overrideable to support testing environments, multi-region deployments, disaster recovery, or failover scenarios.\n- Ensure consistency of the region setting across all AWS service configurations within the application to avoid cross-region conflicts and data inconsistency.\n- Consider the implications of region selection on data sovereignty laws, compliance requirements, and organizational policies.\n- Monitor AWS announcements for new regions or changes in service availability to keep configurations up to date.\n\n**Examples**\n- \"us-east-1\" (Northern Virginia, USA)\n- \"eu-west-1\" (Ireland, Europe)\n- \"ap-southeast-2\" (Sydney, Australia)\n- \"ca-central-1\" (Canada Central)\n- \"sa-east-1\" (São Paulo, Brazil)\n\n**Important notes**\n- Selecting the correct region is crucial for optimizing application performance, cost efficiency, and regulatory compliance."},"method":{"type":"string","enum":["putItem","updateItem"],"description":"Specifies which DynamoDB write operation this import performs for each record: \"putItem\" or \"updateItem\". This determines how each incoming record is written to the target table and which of the related configuration fields apply. 
The chosen operation directly affects how the request is built from the other DynamoDB settings and how existing data in the table is modified.\n\n**Field behavior**\n- \"putItem\" creates a new item, or fully replaces an existing item with the same primary key, using the document supplied in itemDocument.\n- \"updateItem\" modifies only the attributes referenced in updateExpression on the item identified by partitionKey (and sortKey, if defined), leaving all other attributes unchanged.\n- Works together with conditionExpression, expressionAttributeNames, and expressionAttributeValues to build the underlying DynamoDB request.\n- Must be one of the two supported values; no other operation types are accepted.\n\n**Implementation guidance**\n- Use \"putItem\" when each incoming record represents the complete item that should be stored.\n- Use \"updateItem\" when only a subset of attributes should change, and describe those changes in updateExpression.\n- Pair either operation with conditionExpression when the write should only happen under specific conditions (for example, to avoid overwriting newer data).\n\n**Examples**\n- \"method\": \"putItem\" replaces or creates the full item identified by its key for every incoming record.\n- \"method\": \"updateItem\" applies an update expression such as \"SET #status = :statusValue\" to the existing item for every incoming record.\n\n**Important notes**\n- \"putItem\" overwrites all attributes of an existing item with the same key; attributes not present in itemDocument are removed.\n- \"updateItem\" requires a valid updateExpression describing the attributes to set, add, remove, or delete."},"tableName":{"type":"string","description":"The name of the DynamoDB table to be accessed or manipulated, serving as the primary identifier for routing API requests to the correct data store. This string must exactly match the name of an existing DynamoDB table within the specified AWS account and region, including case sensitivity. 
It is essential for performing operations such as reading, writing, updating, or deleting items within the table, ensuring that all data interactions target the intended resource accurately.\n\n**Field behavior**\n- Specifies the exact DynamoDB table targeted for all data operations.\n- Must be unique within the AWS account and region to prevent ambiguity.\n- Directs API calls to the appropriate table for the requested action.\n- Case-sensitive and must strictly adhere to DynamoDB naming conventions.\n- Acts as a critical routing parameter in API requests to identify the data source.\n\n**Implementation guidance**\n- Confirm the table name matches the existing DynamoDB table in AWS, respecting case sensitivity.\n- Avoid using reserved words or disallowed special characters to prevent API errors.\n- Validate the table name format programmatically before making API calls.\n- Manage table names through environment variables or configuration files to facilitate deployment across multiple environments (development, staging, production).\n- Ensure all application components referencing the table name are updated consistently if the table name changes.\n- Incorporate error handling to manage cases where the specified table does not exist or is inaccessible.\n\n**Examples**\n- \"Users\"\n- \"Orders2024\"\n- \"Inventory_Table\"\n- \"CustomerData\"\n- \"Sales-Data.2024\"\n\n**Important notes**\n- DynamoDB table names can be up to 255 characters in length.\n- Table names are case-sensitive and must be used consistently across all API calls.\n- The specified table must exist in the targeted AWS region before any operation is performed.\n- Changing the table name requires updating all dependent code and redeploying applications as necessary.\n- Incorrect table names will result in API errors such as ResourceNotFoundException.\n\n**Dependency chain**\n- Dependent on the AWS account and region context where DynamoDB is accessed.\n- Requires appropriate IAM permissions to perform operations on the specified table.\n- Works in conjunction with other DynamoDB parameters such as key schema, attribute definitions, and provisioned throughput settings.\n- Relies on network connectivity and AWS service availability for successful API interactions.\n\n**Technical details**\n- Represented as a string value.\n- Must conform to DynamoDB naming rules: allowed characters include alphanumeric characters, underscore (_), hy"},"partitionKey":{"type":"string","description":"partitionKey specifies the attribute name used as the primary partition key in a DynamoDB table. This key is essential for uniquely identifying each item within the table and determines the partition where the data is physically stored. It plays a fundamental role in data distribution, scalability, and query efficiency by enabling DynamoDB to hash the key value and allocate items across multiple storage partitions. The partition key must be present in every item and is required for all operations involving item creation, retrieval, update, and deletion. Selecting an appropriate partition key with high cardinality ensures even data distribution, prevents performance bottlenecks, and supports efficient query patterns. 
When combined with an optional sort key, it forms a composite primary key that enables more complex data models and range queries.\n\n**Field behavior**\n- Defines the primary attribute used to partition and organize data in a DynamoDB table.\n- Must have unique values within the context of the partition to ensure item uniqueness.\n- Drives the distribution of data across partitions to optimize performance and scalability.\n- Required for all item-level operations such as put, get, update, and delete.\n- Works in tandem with an optional sort key to form a composite primary key for more complex data models.\n\n**Implementation guidance**\n- Select an attribute with a high cardinality (many distinct values) to promote even data distribution and avoid hot partitions.\n- Ensure the partition key attribute is included in every item stored in the table.\n- Analyze application access patterns to choose a partition key that supports efficient queries and minimizes read/write contention.\n- Use a composite key (partition key + sort key) when you need to model one-to-many relationships or perform range queries.\n- Avoid using attributes with low variability or skewed distributions as partition keys to prevent performance bottlenecks.\n\n**Examples**\n- \"userId\" for applications where data is organized by individual users.\n- \"orderId\" in an e-commerce system to uniquely identify each order.\n- \"deviceId\" for storing telemetry data from IoT devices.\n- \"customerEmail\" when email addresses serve as unique identifiers.\n- \"regionCode\" combined with a sort key for geographic data partitioning.\n\n**Important notes**\n- The partition key is mandatory and immutable after table creation; changing it requires creating a new table.\n- The attribute type must be one of DynamoDB’s supported scalar types: String, Number, or Binary.\n- Partition key values are hashed internally by DynamoDB to determine the physical partition.\n- Using a poorly chosen partition key"},"sortKey":{"type":"string","description":"The attribute name designated as the sort key (also known as the range key) in a DynamoDB table. This key is essential for ordering and organizing items that share the same partition key, enabling efficient sorting, range queries, and precise retrieval of data within a partition. Together with the partition key, the sort key forms the composite primary key that uniquely identifies each item in the table. It plays a critical role in query operations by allowing filtering, sorting, and conditional retrieval based on its values, which can be of type String, Number, or Binary. 
Proper definition and usage of the sort key significantly enhance data access patterns, query performance, and cost efficiency in DynamoDB.\n\n**Field behavior**\n- Defines the attribute DynamoDB uses to sort and organize items within the same partition key.\n- Enables efficient range queries, ordered retrieval, and filtering of items.\n- Must be unique in combination with the partition key for each item to ensure item uniqueness.\n- Supports composite key structures when combined with the partition key.\n- Influences the physical storage order of items within a partition, optimizing query performance.\n\n**Implementation guidance**\n- Specify the attribute name exactly as defined in the DynamoDB table schema.\n- Ensure the attribute type matches the table definition (String, Number, or Binary).\n- Use this key to perform queries requiring sorting, range-based filtering, or conditional retrieval.\n- Omit or set to null if the DynamoDB table does not define a sort key.\n- Validate that the sort key attribute exists and is properly indexed in the table schema before use.\n- Consider the sort key design carefully to support intended query patterns and access efficiency.\n\n**Examples**\n- \"timestamp\"\n- \"createdAt\"\n- \"orderId\"\n- \"category#date\"\n- \"eventDate\"\n- \"userScore\"\n\n**Important notes**\n- The sort key is optional; some DynamoDB tables only have a partition key without a sort key.\n- Queries specifying both partition key and sort key conditions are more efficient and cost-effective.\n- The sort key attribute must be pre-defined in the table schema and cannot be added dynamically.\n- The combination of partition key and sort key must be unique for each item.\n- Sort key values influence the physical storage order of items within a partition, impacting query speed.\n- Using a well-designed sort key can reduce the need for additional indexes and improve scalability.\n\n**Dependency chain**\n- Requires a DynamoDB table configured with a composite primary key (partition key"},"itemDocument":{"type":"string","description":"The `itemDocument` property represents the complete and authoritative data record stored within a DynamoDB table. It is structured as a JSON object that comprehensively includes all attribute names and their corresponding values, fully reflecting the schema and data model of the DynamoDB item. This property encapsulates every attribute of the item, including mandatory primary keys, optional sort keys (if applicable), and any additional data fields, supporting complex nested objects, maps, lists, and arrays to accommodate DynamoDB’s rich data types. 
It serves as the primary representation of an item for all item-level operations such as reading (GetItem), writing (PutItem), updating (UpdateItem), and deleting (DeleteItem) within DynamoDB, ensuring data integrity and consistency throughout these processes.\n\n**Field behavior**\n- Contains the full and exact representation of a DynamoDB item as a structured JSON document.\n- Includes all required attributes such as primary keys and sort keys, as well as any optional or additional fields.\n- Supports complex nested data structures including maps, lists, and sets, consistent with DynamoDB’s flexible data model.\n- Acts as the main payload for CRUD (Create, Read, Update, Delete) operations on DynamoDB items.\n- Reflects either the current stored state or the intended new state of the item depending on the operation context.\n- Can represent partial documents when used with projection expressions or update operations that modify subsets of attributes.\n\n**Implementation guidance**\n- Ensure strict adherence to DynamoDB’s supported data types (String, Number, Binary, Boolean, Null, List, Map, Set) and attribute naming conventions.\n- Validate presence and correct formatting of primary key attributes to uniquely identify the item for key-based operations.\n- For update operations, provide either the entire item document or only the attributes to be modified, depending on the API method and update strategy.\n- Maintain consistent attribute names and data types across operations to preserve schema integrity and prevent runtime errors.\n- Utilize attribute projections or partial documents when full item data is unnecessary to optimize performance, reduce latency, and minimize cost.\n- Handle nested structures carefully to ensure proper serialization and deserialization between application code and DynamoDB.\n\n**Examples**\n- `{ \"UserId\": \"12345\", \"Name\": \"John Doe\", \"Age\": 30, \"Preferences\": { \"Language\": \"en\", \"Notifications\": true } }`\n- `{ \"OrderId\": \"A100\", \"Date\": \"2024-06-01\", \"Items\":"},"updateExpression":{"type":"string","description":"A string that defines the specific attributes to be updated, added, or removed within a DynamoDB item using DynamoDB's UpdateExpression syntax. This expression provides precise control over how item attributes are modified by including one or more of the following clauses: SET (to assign or overwrite attribute values), REMOVE (to delete attributes), ADD (to increment numeric attributes or add elements to sets), and DELETE (to remove elements from sets). The updateExpression enables atomic, conditional, and efficient updates without replacing the entire item, ensuring data integrity and minimizing write costs. It must be carefully constructed using placeholders for attribute names and values—ExpressionAttributeNames and ExpressionAttributeValues—to avoid conflicts with reserved keywords and prevent injection vulnerabilities. 
This expression is essential for performing complex update operations in a single request, supporting both simple attribute changes and advanced manipulations like incrementing counters or modifying sets.\n\n**Field behavior**\n- Specifies one or more atomic update operations on a DynamoDB item's attributes.\n- Supports combining multiple update actions (SET, REMOVE, ADD, DELETE) within a single expression, separated by spaces.\n- Requires strict adherence to DynamoDB's UpdateExpression syntax and semantics.\n- Works in conjunction with ExpressionAttributeNames and ExpressionAttributeValues to safely reference attribute names and values.\n- Only affects the attributes explicitly mentioned; all other attributes remain unchanged.\n- Enables conditional updates when combined with ConditionExpression for safe concurrent modifications.\n\n**Implementation guidance**\n- Construct the updateExpression using valid DynamoDB syntax, incorporating clauses such as SET, REMOVE, ADD, and DELETE as needed.\n- Always use placeholders (e.g., '#attrName' for attribute names and ':value' for attribute values) to avoid conflicts with reserved words and enhance security.\n- Validate the expression syntax and ensure all referenced placeholders are defined before executing the update operation.\n- Combine multiple update actions by separating them with spaces within the same expression string.\n- Test expressions thoroughly to prevent runtime errors caused by syntax issues or missing placeholders.\n- Use SET for assigning or overwriting attribute values, REMOVE for deleting attributes, ADD for incrementing numbers or adding set elements, and DELETE for removing set elements.\n- Avoid using reserved keywords directly; always substitute them with ExpressionAttributeNames placeholders.\n\n**Examples**\n- \"SET #name = :newName, #age = :newAge REMOVE #obsoleteAttribute\"\n- \"ADD #score :incrementValue DELETE #tags :tagToRemove\"\n- \"SET #status = :statusValue\"\n- \"REMOVE #temporaryField"},"conditionExpression":{"type":"string","description":"A string that defines the conditional logic to be applied to a DynamoDB operation, specifying the precise criteria that must be met for the operation to proceed. This expression leverages DynamoDB's condition expression syntax to evaluate attributes of the targeted item, enabling fine-grained control over operations such as PutItem, UpdateItem, DeleteItem, and transactional writes. It supports a comprehensive set of logical operators (AND, OR, NOT), comparison operators (=, <, >, BETWEEN, IN), functions (attribute_exists, attribute_not_exists, begins_with, contains, size), and attribute path notations to construct complex, nested conditions. 
This ensures data integrity by preventing unintended modifications and enforcing business rules at the database level.\n\n**Field behavior**\n- Determines whether the DynamoDB operation executes based on the evaluation of the condition against the item's attributes.\n- Evaluated atomically before performing the operation; if the condition evaluates to false, the operation is aborted and no changes are made.\n- Supports combining multiple conditions using logical operators for complex, multi-attribute criteria.\n- Utilizes built-in functions to inspect attribute existence, value patterns, and sizes, enabling sophisticated conditional checks.\n- Applies exclusively to the specific item targeted by the operation, including nested attributes accessed via attribute path notation.\n- Influences transactional operations by ensuring all conditions in a transaction are met before committing.\n\n**Implementation guidance**\n- Always use ExpressionAttributeNames and ExpressionAttributeValues to safely reference attribute names and values, avoiding conflicts with reserved keywords and improving readability.\n- Validate the syntax of the condition expression prior to request submission to prevent runtime errors and failed operations.\n- Carefully construct expressions to handle edge cases, such as attributes that may be missing or have null values.\n- Combine multiple conditions logically to enforce precise constraints and business logic on the operation.\n- Test condition expressions thoroughly across diverse data scenarios to ensure expected behavior and robustness.\n- Keep expressions as simple and efficient as possible to optimize performance and reduce request complexity.\n\n**Examples**\n- \"attribute_exists(#name) AND #age >= :minAge\"\n- \"#status = :activeStatus\"\n- \"attribute_not_exists(#id)\"\n- \"#score BETWEEN :low AND :high\"\n- \"begins_with(#title, :prefix)\"\n- \"contains(#tags, :tagValue) OR size(#comments) > :minComments\"\n\n**Important notes**\n- The conditionExpression is optional but highly recommended to safeguard against unintended overwrites, deletions, or updates.\n- If the condition"},"expressionAttributeNames":{"type":"string","description":"expressionAttributeNames is a map of substitution tokens used to safely reference attribute names within DynamoDB expressions. This property allows you to define placeholder tokens for attribute names that might otherwise cause conflicts due to being reserved words, containing special characters, or having names that are dynamically generated. 
By using these placeholders, you can avoid syntax errors and ambiguities in expressions such as ProjectionExpression, FilterExpression, UpdateExpression, and ConditionExpression, ensuring your queries and updates are both valid and maintainable.\n\n**Field behavior**\n- Functions as a dictionary where each key is a placeholder token starting with a '#' character, and each value is the actual attribute name it represents.\n- Enables the use of reserved words, special characters, or otherwise problematic attribute names within DynamoDB expressions.\n- Supports complex expressions by allowing multiple attribute name substitutions within a single expression.\n- Prevents syntax errors and improves clarity by explicitly mapping placeholders to attribute names.\n- Applies only within the scope of the expression where it is defined, ensuring localized substitution without affecting other parts of the request.\n\n**Implementation guidance**\n- Always prefix keys with a '#' to clearly indicate they are substitution tokens.\n- Ensure each placeholder token is unique within the scope of the expression to avoid conflicts.\n- Use expressionAttributeNames whenever attribute names are reserved words, contain special characters, or are dynamically generated.\n- Combine with expressionAttributeValues when expressions require both attribute name and value substitutions.\n- Verify that the attribute names used as values exactly match those defined in your DynamoDB table schema to prevent runtime errors.\n- Keep the mapping concise and relevant to the attributes used in the expression to maintain readability.\n- Avoid overusing substitution tokens for attribute names that do not require it, to keep expressions straightforward.\n\n**Examples**\n- `{\"#N\": \"Name\", \"#A\": \"Age\"}` to substitute attribute names \"Name\" and \"Age\" in an expression.\n- `{\"#status\": \"Status\"}` when \"Status\" is a reserved word or needs escaping in an expression.\n- `{\"#yr\": \"Year\", \"#mn\": \"Month\"}` for expressions involving multiple date-related attributes.\n- `{\"#addr\": \"Address\", \"#zip\": \"ZipCode\"}` to handle attribute names with special characters or spaces.\n- `{\"#P\": \"Price\", \"#Q\": \"Quantity\"}` used in an UpdateExpression to increment numeric attributes.\n\n**Important notes**\n- Keys must always begin with the '#' character; otherwise, the substitution"},"expressionAttributeValues":{"type":"string","description":"expressionAttributeValues is a map of substitution tokens for attribute values used within DynamoDB expressions. These tokens serve as placeholders that safely inject values into expressions such as ConditionExpression, FilterExpression, UpdateExpression, and KeyConditionExpression. By separating data from code, this mechanism prevents injection attacks and syntax errors, ensuring secure and reliable query and update operations. Each key in this map is a placeholder string that begins with a colon (\":\"), and each corresponding value is a DynamoDB attribute value object representing the actual data to be used in the expression. 
This design enforces that user input is never directly embedded into expressions, enhancing both security and maintainability.\n\n**Field behavior**\n- Contains key-value pairs where keys are placeholders prefixed with a colon (e.g., \":val1\") used within DynamoDB expressions.\n- Each key maps to a DynamoDB attribute value object specifying the data type and value, supporting all DynamoDB data types including strings, numbers, binaries, lists, maps, sets, and booleans.\n- Enables safe and dynamic substitution of attribute values in expressions, avoiding direct string concatenation and reducing the risk of injection vulnerabilities.\n- Mandatory when expressions reference attribute values to ensure correct parsing and execution of queries or updates.\n- Placeholders are case-sensitive and must exactly match those used in the corresponding expressions.\n- Supports complex data structures, allowing nested maps and lists as attribute values for advanced querying scenarios.\n\n**Implementation guidance**\n- Always prefix placeholder keys with a colon (\":\") to clearly identify them as substitution tokens within expressions.\n- Ensure every placeholder referenced in an expression has a corresponding entry in expressionAttributeValues to avoid runtime errors.\n- Format each value according to DynamoDB’s attribute value specification, such as { \"S\": \"string\" }, { \"N\": \"123\" }, { \"BOOL\": true }, or complex nested structures.\n- Avoid using reserved words, spaces, or invalid characters in placeholder names to prevent syntax errors and conflicts.\n- Validate that the data types of values align with the expected attribute types defined in the table schema to maintain data integrity.\n- When multiple expressions are used simultaneously, maintain unique and consistent placeholder names to prevent collisions and ambiguity.\n- Use expressionAttributeValues in conjunction with expressionAttributeNames when attribute names are reserved words or contain special characters to ensure proper expression parsing.\n- Consider the size limitations of expressionAttributeValues to avoid exceeding DynamoDB’s request size limits.\n\n**Examples**\n- { \":val1\": { \"S\": \""},"ignoreExtract":{"type":"string","description":"Specifies whether the extraction process from DynamoDB data should be ignored or skipped during the operation, enabling conditional control over data retrieval to optimize performance, reduce latency, or avoid redundant processing in data workflows. This flag allows systems to bypass the extraction step when data is already available, cached, or deemed unnecessary for the current operation, thereby improving efficiency and resource utilization.  \n**Field behavior:**  \n- When set to true, the extraction of data from DynamoDB is completely bypassed, preventing any data retrieval from the source.  \n- When set to false or omitted, the extraction process proceeds as normal, retrieving data for further processing.  \n- Primarily used to optimize workflows by skipping unnecessary extraction steps when data is already available or not needed.  \n- Influences the flow of data processing by controlling whether DynamoDB data is fetched or not.  \n- Affects downstream processing by potentially providing no new data input if extraction is skipped.  \n**Implementation guidance:**  \n- Accepts a boolean value: true to skip extraction, false to perform extraction.  \n- Ensure that downstream components can handle scenarios where no data is extracted due to this flag being true.  
\n- Validate input to avoid unintended skipping of extraction that could cause data inconsistencies or stale results.  \n- Use this flag judiciously, especially in complex pipelines where data dependencies exist, to prevent breaking data integrity.  \n- Incorporate appropriate logging or monitoring to track when extraction is skipped for auditability and debugging.  \n**Examples:**  \n- ignoreExtract: true — extraction from DynamoDB is skipped, useful when data is cached or pre-fetched.  \n- ignoreExtract: false — extraction occurs normally, retrieving fresh data from DynamoDB.  \n- omit the property entirely — defaults to false, extraction proceeds as usual.  \n**Important notes:**  \n- Skipping extraction may result in incomplete, outdated, or stale data if subsequent operations rely on fresh DynamoDB data.  \n- This option should only be enabled when you are certain that extraction is unnecessary or redundant to avoid data inconsistencies.  \n- Consider the impact on data integrity, consistency, and downstream processing before enabling this flag.  \n- Misuse can lead to errors, unexpected behavior, or data quality issues in data workflows.  \n- Ensure that any caching or alternative data sources used in place of extraction are reliable and up-to-date.  \n**Dependency chain:**  \n- Requires a valid DynamoDB data source configuration to be relevant.  \n-"}}},"Http":{"type":"object","description":"Configuration for HTTP imports.\n\nIMPORTANT: When the _connectionId field points to a connection where the type is http, \nthis object MUST be populated for the import to function properly. This is a required configuration\nfor all HTTP based imports, as determined by the connection associated with the import.\n","properties":{"formType":{"type":"string","enum":["assistant","http","rest","graph_ql","assistant_graphql"],"description":"The form type to use for the import.\n- **assistant**: The import is for a system that we have an assistant form for.  This is the default value if there is a connector available for the system.\n- **http**: The import is an HTTP import.\n- **rest**: This value is only used for legacy imports that are not supported by the new HTTP framework.\n- **graph_ql**: The import is a GraphQL import.\n- **assistant_graphql**: The import is for a system that we have an assistant form for and utilizes GraphQL.  This is the default value if there is a connector available for the system and the connector supports GraphQL.\n"},"type":{"type":"string","enum":["file","records"],"description":"Specifies the type of data being imported via HTTP.\n\n- **file**: The import handles raw file content (binary data, PDF, images, etc.)\n- **records**: The import handles structured record data (JSON objects, XML records, etc.) 
- most common\n\nMost HTTP imports use \"records\" type for standard REST API data imports.\nOnly use \"file\" type when importing raw file content that will be processed downstream.\n"},"requestMediaType":{"type":"string","enum":["xml","json","csv","urlencoded","form-data","octet-stream","plaintext"],"description":"Specifies the media type (content type) of the request body sent to the target API.\n\n- **json**: For JSON request bodies (Content-Type: application/json) - most common\n- **xml**: For XML request bodies (Content-Type: application/xml)\n- **csv**: For CSV data (Content-Type: text/csv)\n- **urlencoded**: For URL-encoded form data (Content-Type: application/x-www-form-urlencoded)\n- **form-data**: For multipart form data\n- **octet-stream**: For binary data\n- **plaintext**: For plain text content\n"},"_httpConnectorEndpointIds":{"type":"array","items":{"type":"string","format":"objectId"},"description":"Array of HTTP connector endpoint IDs to use for this import.\nMultiple endpoints can be specified for different request types or operations.\n"},"blobFormat":{"type":"string","enum":["utf8","ucs2","utf-16le","ascii","binary","base64","hex"],"description":"Character encoding format for blob/binary data imports.\nOnly relevant when type is \"file\" or when handling binary content.\n"},"batchSize":{"type":"integer","description":"Maximum number of records to submit per HTTP request.\n\n- REST services typically use batchSize=1 (one record per request)\n- REST batch endpoints or RPC/XML services can handle multiple records\n- Affects throughput and API rate limiting\n\nDefault varies by API - consult target API documentation for optimal value.\n"},"successMediaType":{"type":"string","enum":["xml","json","plaintext"],"description":"Specifies the media type expected in successful responses.\n\n- **json**: For JSON responses (most common)\n- **xml**: For XML responses\n- **plaintext**: For plain text responses\n"},"requestType":{"type":"array","items":{"type":"string","enum":["CREATE","UPDATE"]},"description":"Specifies the type of operation(s) this import performs.\n\n- **CREATE**: Creating new records\n- **UPDATE**: Updating existing records\n\nFor composite/upsert imports, specify both operations. The array is positionally\naligned with relativeURI and method arrays — each index maps to one operation.\nThe existingExtract field determines which operation is used at runtime.\n\nExample composite upsert:\n```json\n\"requestType\": [\"UPDATE\", \"CREATE\"],\n\"method\": [\"PUT\", \"POST\"],\n\"relativeURI\": [\"/customers/{{{data.0.customerId}}}.json\", \"/customers.json\"],\n\"existingExtract\": \"customerId\"\n```\n"},"errorMediaType":{"type":"string","enum":["xml","json","plaintext"],"description":"Specifies the media type expected in error responses.\n\n- **json**: For JSON error responses (most common)\n- **xml**: For XML error responses\n- **plaintext**: For plain text error messages\n"},"relativeURI":{"type":"array","items":{"type":"string"},"description":"**CRITICAL: This MUST be an array of strings, NOT an object.**\n\n**Handlebars data context:** At runtime, `relativeURI` templates always render against the\n**pre-mapped** record — the original input record that arrives at this import step, before\nthe Import's mapping step is applied. 
This is intentional: URI construction usually needs\nbusiness identifiers that mappings may rename or remove.\n\nArray of relative URI path(s) for the import endpoint.\nCan include handlebars expressions like `{{field}}` for dynamic values.\n\n**Single-operation imports (one element)**\n```json\n\"relativeURI\": [\"/api/v1/customers\"]\n```\n\n**Composite/upsert imports (multiple elements — positionally aligned)**\nFor imports with both CREATE and UPDATE in requestType, relativeURI, method,\nand requestType arrays are **positionally aligned**. Each index corresponds\nto one operation. The existingExtract field determines which index is used at\nruntime: if the existingExtract field has a value → use the UPDATE index;\nif empty/missing → use the CREATE index.\n\nExample of a composite upsert:\n```json\n\"relativeURI\": [\"/customers/{{{data.0.shopifyCustomerId}}}.json\", \"/customers.json\"],\n\"method\": [\"PUT\", \"POST\"],\n\"requestType\": [\"UPDATE\", \"CREATE\"],\n\"existingExtract\": \"shopifyCustomerId\"\n```\nHere index 0 is the UPDATE operation (PUT to a specific customer) and index 1 is\nthe CREATE operation (POST to the collection endpoint).\n\n**IMPORTANT**: For composite imports, use the `{{{data.0.fieldName}}}` handlebars\nsyntax (triple braces with data.0 prefix) to reference the existing record ID in\nthe UPDATE URI. Do NOT use `{{#if}}` conditionals in a single URI — use separate\narray elements instead.\n\n**Wrong format (DO not do THIS)**\n```json\n\"relativeURI\": {\"/api/v1/customers\": \"CREATE\"}\n```\n```json\n\"relativeURI\": [\"{{#if id}}/customers/{{id}}.json{{else}}/customers.json{{/if}}\"]\n```\n"},"method":{"type":"array","items":{"type":"string","enum":["GET","PUT","POST","PATCH","DELETE"]},"description":"HTTP method(s) used for import requests.\n\n- **POST**: Most common for creating new records\n- **PUT**: For full updates/replacements\n- **PATCH**: For partial updates\n- **DELETE**: For deletions\n- **GET**: Rarely used for imports\n\nFor composite/upsert imports, specify multiple methods positionally aligned\nwith relativeURI and requestType arrays. Each index maps to one operation.\n\nExample: `[\"PUT\", \"POST\"]` with `requestType: [\"UPDATE\", \"CREATE\"]` means\nindex 0 uses PUT for updates, index 1 uses POST for creates.\n"},"_httpConnectorVersionId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector version being used."},"_httpConnectorResourceId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector resource being used."},"_httpConnectorEndpointId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector endpoint being used (single endpoint)."},"body":{"type":"array","items":{"type":"object"},"description":"**Important:** This field should be LEFT UNDEFINED for the vast majority of HTTP imports.\n\nThis is an OPTIONAL field that should only be set in very specific, rare cases. 
For standard REST API imports\nthat send JSON data, this field MUST be left undefined because data transformation is handled through the\nImport's mapping step, not through this body template.\n\n**When to leave this field undefined (MOST common CASE)**\n\nLeave this field undefined for ALL standard data imports, including:\n- REST API imports that send JSON records to create/update resources\n- APIs that accept JSON request bodies (the majority of modern APIs)\n- Any import where you want to use Import mappings to transform data\n- Standard CRUD operations (Create, Update, Delete) via REST APIs\n- GraphQL mutations that accept JSON input\n- Any import where the Import mapping step will handle data transformation\n\nExamples of imports that should have this field undefined:\n- \"Import customer data into Shopify\" → undefined (mappings handle the data transformation)\n- \"Create orders via custom REST API\" → undefined (mappings handle the data transformation)\n\n**KEY CONCEPT:** Imports have a dedicated \"Mapping\" step that transforms source data into the target\nformat. This is the standard, preferred approach. When the body field is set, the mapping step still runs\nfirst — the body template then renders against the **post-mapped** record instead of the destination API\nreceiving the mapped record as-is. This lets you post-process the mapped output into XML, SOAP envelopes,\nor other non-JSON shapes, but forces you to hand-template the entire request and is harder to maintain and debug.\n\n**When to set this field (RARE use CASE)**\n\nSet this field ONLY when:\n1. The target API requires XML or SOAP format (not JSON)\n2. You need a highly customized request structure that cannot be achieved through standard mappings\n3. You're working with a legacy API that requires very specific formatting\n4. The API documentation explicitly shows a complex XML/SOAP envelope structure\n5. When the user explicitly specifies a request body\n\nExamples of when to set body:\n- \"Import to SOAP web service\" → set body with XML/SOAP template\n- \"Send data to XML-only legacy API\" → set body with XML template\n- \"Complex SOAP envelope with multiple nested elements\" → set body with SOAP template\n- \"Create an import to /test with the body as {'key': 'value', 'key2': 'value2', ...}\"\n\n**Implementation details**\n\nWhen this field is undefined (default for most imports):\n- The system uses the Import's mapping step to transform data\n- Mappings provide a visual, intuitive way to map source fields to target fields\n- The mapped data is automatically sent as the request body\n- Easier to debug and maintain\n\nWhen this field is set:\n- The mapping step still runs — it produces a post-mapped record that is then fed into this body template\n- The template renders against the **post-mapped** record (use `{{record.<mappedField>}}` to reference mapping outputs)\n- You are responsible for shaping the final request body (XML/SOAP/custom JSON) by wrapping / restructuring the mapped data\n- Harder to maintain and debug than relying on mappings alone\n\n**Decision flowchart**\n\n1. Does the target API accept JSON request bodies?\n   → YES: Leave this field undefined (use mappings instead)\n2. Are you comfortable using the Import's mapping step?\n   → YES: Leave this field undefined\n3. Does the API require XML or SOAP format?\n   → YES: Set this field with appropriate XML/SOAP template\n4. 
Does the API require a highly unusual request structure that mappings can't handle?\n   → YES: Consider setting this field (but try mappings first)\n\nRemember: When in doubt, leave this field undefined. Almost all modern REST APIs work perfectly\nwith the standard mapping approach, and this is much easier to use and maintain.\n"},"existingExtract":{"type":"string","description":"Field name or JSON path used to determine if a record already exists in the destination,\nfor composite (upsert) imports that have both CREATE and UPDATE request types.\n\nWhen the import has requestType: [\"UPDATE\", \"CREATE\"] (or [\"CREATE\", \"UPDATE\"]):\n- If the field specified by existingExtract has a value in the incoming record,\n  the UPDATE operation (PUT) is used with the corresponding relativeURI and method.\n- If the field is empty or missing, the CREATE operation (POST) is used.\n\nThis field drives the upsert decision and must match a field that is populated by\nan upstream lookup or response mapping (e.g., a destination system ID like \"shopifyCustomerId\").\n\nIMPORTANT: Only set this field for composite/upsert imports that have BOTH CREATE and UPDATE\nin requestType. Do not set for single-operation imports.\n"},"ignoreExtract":{"type":"string","description":"JSON path to the field in the record that is used to determine if the record already exists.\nOnly used when ignoreMissing or ignoreExisting is set.\n"},"endPointBodyLimit":{"type":"integer","description":"Maximum size limit for the request body in bytes.\nUsed to enforce API-specific size constraints.\n"},"headers":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}},"description":"Custom HTTP headers to include with import requests.\n\nEach header has a name and value, which can include handlebars expressions.\n\nAt runtime, header value templates render against the **pre-mapped**\nrecord (the original input record, before the Import's mapping step).\nUse `{{record.<field>}}` to reference fields as they appear in the\nupstream source — mappings don't rename or drop fields for header\nevaluation.\n"},"response":{"type":"object","properties":{"resourcePath":{"type":"array","items":{"type":"string"},"description":"JSON path to the resource collection in the response.\nRequired when batchSize > 1.\n"},"resourceIdPath":{"type":"array","items":{"type":"string"},"description":"JSON path to the ID field in each resource.\nIf not specified, system looks for \"id\" or \"_id\" fields.\n"},"successPath":{"type":"array","items":{"type":"string"},"description":"JSON path to a field that indicates success.\nUsed for APIs that don't rely solely on HTTP status codes.\n"},"successValues":{"type":"array","items":{"type":"array","items":{"type":"string"}},"description":"Values at successPath that indicate success.\n"},"failPath":{"type":"array","items":{"type":"string"},"description":"JSON path to a field that indicates failure (even if status=200).\n"},"failValues":{"type":"array","items":{"type":"array","items":{"type":"string"}},"description":"Values at failPath that indicate failure.\n"},"errorPath":{"type":"string","description":"JSON path to error message in the response.\n"},"allowArrayforSuccessPath":{"type":"boolean","description":"Whether to allow array values for successPath evaluation.\n"},"hasHeader":{"type":"boolean","description":"Indicates if the first record in the response is a header row (for CSV responses).\n"}},"description":"Configuration for parsing and interpreting HTTP 
responses.\n"},"_asyncHelperId":{"type":"string","format":"objectId","description":"Reference to an AsyncHelper resource for handling asynchronous import operations.\n\nUsed when the target API uses a \"fire-and-check-back\" pattern (HTTP 202, polling, etc.).\n"},"ignoreLookupName":{"type":"string","description":"Name of the lookup to use for checking resource existence.\nOnly used when ignoreMissing or ignoreExisting is set.\n"}}},"Ftp":{"type":"object","description":"Configuration for FTP imports","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"Identifier for the third-party connector utilized in the FTP integration, serving as a unique and immutable reference that links the FTP configuration to an external connector service. This identifier is essential for accurately associating the FTP connection with the intended third-party system, enabling seamless and secure data transfer, synchronization, and management through the specified connector. It ensures that all FTP operations are correctly routed and managed via the external service, supporting reliable integration workflows and maintaining system integrity throughout the connection lifecycle.\n\n**Field behavior**\n- Uniquely identifies the third-party connector associated with the FTP configuration.\n- Acts as a key reference to bind FTP settings to a specific external connector service.\n- Required when creating, updating, or managing FTP connections via third-party integrations.\n- Typically immutable after initial assignment to preserve connection consistency.\n- Used by system components to route FTP operations through the correct external connector.\n\n**Implementation guidance**\n- Should be a string or numeric identifier that aligns with the third-party connector’s identification scheme.\n- Must be validated against the system’s registry of available connectors to ensure validity.\n- Should be set securely and protected from unauthorized modification.\n- Updates to this identifier should be handled cautiously, with appropriate validation and impact assessment.\n- Ensure the identifier complies with any formatting or naming conventions imposed by the third-party service.\n- Transmit and store this identifier securely, using encryption where applicable.\n\n**Examples**\n- \"connector_12345\"\n- \"tpConnector-67890\"\n- \"ftpThirdPartyConnector01\"\n- \"extConnector_abcde\"\n- \"thirdPartyFTP_001\"\n\n**Important notes**\n- This field is critical for correctly mapping FTP configurations to their corresponding third-party services.\n- An incorrect or missing _tpConnectorId can lead to connection failures, data misrouting, or integration errors.\n- Synchronization between this identifier and the third-party connector records must be maintained to avoid inconsistencies.\n- Changes to this field may require re-authentication or reconfiguration of the FTP integration.\n- Proper error handling should be implemented to manage cases where the identifier does not match any registered connector.\n\n**Dependency chain**\n- Depends on the existence and registration of third-party connectors within the system.\n- Referenced by FTP connection management modules, authentication services, and integration workflows.\n- Interacts with authorization mechanisms to ensure secure access to third-party services.\n- May influence logging, monitoring, and auditing processes related to FTP integrations.\n\n**Technical details**\n- Data type: string (or"},"directoryPath":{"type":"string","description":"The path to the 
directory on the FTP server where files will be accessed, stored, or managed. This path can be specified either as an absolute path starting from the root directory of the FTP server or as a relative path based on the user's initial directory upon login, depending on the server's configuration and permissions. It is essential that the path adheres to the FTP server's directory structure, naming conventions, and access controls to ensure successful file operations such as uploading, downloading, listing, or deleting files. The directoryPath serves as the primary reference point for all file-related commands during an FTP session, determining the scope and context of file management activities.\n\n**Field behavior**\n- Defines the target directory location on the FTP server for all file-related operations.\n- Supports both absolute and relative path formats, interpreted according to the FTP server’s setup.\n- Must comply with the server’s directory hierarchy, naming rules, and access permissions.\n- Used by the FTP client to navigate and perform operations within the specified directory.\n- Influences the scope of accessible files and subdirectories during FTP sessions.\n- Changes to this path affect subsequent file operations until updated or reset.\n\n**Implementation guidance**\n- Validate the directory path format to ensure compatibility with the FTP server, typically using forward slashes (`/`) as separators.\n- Normalize the path to remove redundant elements such as `.` (current directory) and `..` (parent directory) to prevent errors and security vulnerabilities.\n- Handle scenarios where the specified directory does not exist by either creating it if permissions allow or returning a clear, descriptive error message.\n- Properly encode special characters and escape sequences in the path to conform with FTP protocol requirements and server expectations.\n- Provide informative and user-friendly error messages when the path is invalid, inaccessible, or lacks sufficient permissions.\n- Sanitize input to prevent directory traversal attacks or unauthorized access to restricted areas.\n- Consider server-specific behaviors such as case sensitivity, symbolic links, and virtual directories when interpreting the path.\n- Ensure that path changes are atomic and consistent to avoid race conditions during concurrent FTP operations.\n\n**Examples**\n- `/uploads/images`\n- `documents/reports/2024`\n- `/home/user/data`\n- `./backup`\n- `../shared/resources`\n- `/var/www/html`\n- `projects/current`\n\n**Important notes**\n- Access to the specified directory depends on the credentials used during FTP authentication.\n- Directory permissions directly affect the ability to read, write, or list files within the"},"fileName":{"type":"string","description":"The name of the file to be accessed, transferred, or manipulated via FTP (File Transfer Protocol). This property specifies the exact filename, including its extension, uniquely identifying the file within the FTP directory. It is critical for accurately targeting the file during operations such as upload, download, rename, or delete. The filename must be precise and comply with the naming conventions and restrictions of the FTP server’s underlying file system to avoid errors. Case sensitivity depends on the server’s operating system, and special attention should be given to character encoding, especially when handling non-ASCII or international characters. 
The filename should not include any directory path separators, as these are managed separately in directory or path properties.\n\n**Field behavior**\n- Identifies the specific file within the FTP server directory for various file operations.\n- Must include the file extension to ensure precise identification.\n- Case sensitivity is determined by the FTP server’s operating system.\n- Excludes directory or path information; only the filename itself is specified.\n- Used in conjunction with directory or path properties to locate the full file path.\n\n**Implementation guidance**\n- Validate that the filename contains only characters allowed by the FTP server’s file system.\n- Ensure the filename is a non-empty, well-formed string without path separators.\n- Normalize case if required to match server-specific case sensitivity rules.\n- Properly handle encoding for special or international characters to prevent transfer errors.\n- Combine with directory or path properties to construct the complete file location.\n- Avoid embedding directory paths or separators within the filename property.\n\n**Default behavior**\n- When the user does not specify a file naming convention, default to timestamped filenames using {{timestamp}} (e.g., \"items-{{timestamp}}.csv\"). This ensures each run produces a unique file and avoids overwriting previous exports.\n- Only use a fixed filename (without timestamp) if the user explicitly requests overwriting or a fixed name.\n\n**Examples**\n- \"report.pdf\"\n- \"image_2024.png\"\n- \"backup_2023_12_31.zip\"\n- \"data.csv\"\n- \"items-{{timestamp}}.csv\"\n\n**Important notes**\n- The filename alone may not be sufficient to locate the file without the corresponding directory or path.\n- Different FTP servers and operating systems may have varying rules regarding case sensitivity.\n- Path separators such as \"/\" or \"\\\" must not be included in the filename.\n- Adherence to FTP server naming conventions and restrictions is essential to prevent errors.\n- Be mindful of potential filename length limits imposed by the FTP server or protocol.\n\n**Dependency chain**\n- Depends on FTP directory or path properties to specify the full file location.\n- Utilized alongside FTP server credentials and connection settings.\n- May be influenced by file transfer mode (binary or ASCII) based on the file type.\n\n**Technical details**\n- Data type: string\n- Must conform to the FTP server’s"},"inProgressFileName":{"type":"string","description":"inProgressFileName is the designated temporary filename assigned to a file during the FTP upload process to clearly indicate that the file transfer is currently underway and not yet complete. This naming convention serves as a safeguard to prevent other systems or processes from prematurely accessing, processing, or locking the file before the upload finishes successfully. Typically, once the upload is finalized, the file is renamed to its intended permanent filename, signaling that it is ready for further use or processing. Utilizing an in-progress filename helps maintain data integrity, avoid conflicts, and streamline workflow automation in environments where files are continuously transferred and monitored. 
It plays a critical role in managing file state visibility, ensuring that incomplete or partially transferred files are not mistakenly processed, which could lead to data corruption or inconsistent system behavior.\n\n**Field behavior**\n- Acts as a temporary marker for files actively being uploaded via FTP or similar protocols.\n- Prevents other processes, systems, or automated workflows from accessing or processing the file until the upload is fully complete.\n- The file is renamed atomically from the inProgressFileName to the final intended filename upon successful upload completion.\n- Helps manage concurrency, avoid file corruption, partial reads, or race conditions during file transfer.\n- Indicates the current state of the file within the transfer lifecycle, providing clear visibility into upload progress.\n- Supports cleanup mechanisms by identifying orphaned or stale temporary files resulting from interrupted uploads.\n\n**Implementation guidance**\n- Choose a unique, consistent, and easily identifiable naming pattern distinct from final filenames to clearly denote the in-progress status.\n- Commonly use suffixes or prefixes such as \".inprogress\", \".tmp\", \"_uploading\", or similar conventions that are recognized across systems.\n- Ensure the FTP server and client support atomic renaming operations to avoid partial file visibility or race conditions.\n- Verify that the chosen temporary filename does not collide with existing files in the target directory to prevent overwriting or confusion.\n- Maintain consistent naming conventions across all related systems, scripts, and automation tools for clarity and maintainability.\n- Implement robust cleanup routines to detect and remove orphaned or stale in-progress files from failed or abandoned uploads.\n- Coordinate synchronization between upload completion and renaming to prevent premature processing or file locking issues.\n\n**Examples**\n- \"datafile.csv.inprogress\"\n- \"upload.tmp\"\n- \"report_20240615.tmp\"\n- \"file_upload_inprogress\"\n- \"transaction_12345.uploading\"\n\n**Important notes**\n- This filename is strictly temporary and should never be"},"backupDirectoryPath":{"type":"string","description":"backupDirectoryPath specifies the file system path to the directory where backup files are stored during FTP operations. This directory serves as the designated location for saving copies of files before they are overwritten or deleted, ensuring that original data can be recovered if necessary. The path can be either absolute or relative, depending on the system context, and must conform to the operating system’s file path conventions. It is essential that this path is valid, accessible, and writable by the FTP service or application performing the backups to guarantee reliable backup creation and data integrity. Proper configuration of this path is critical to prevent data loss and to maintain a secure and organized backup environment.  \n**Field behavior:**  \n- Specifies the target directory for storing backup copies of files involved in FTP transactions.  \n- Activated only when backup functionality is enabled within the FTP process.  \n- Requires the path to be valid, accessible, and writable to ensure successful backup creation.  \n- Supports both absolute and relative paths, with relative paths resolved against a defined base directory.  \n- If unset or empty, backup operations may be disabled or fallback to a default directory if configured.  
\n**Implementation guidance:**  \n- Validate the existence of the directory and verify write permissions before initiating backups.  \n- Normalize paths to handle trailing slashes, redundant separators, and platform-specific path formats.  \n- Provide clear and actionable error messages if the directory is invalid, inaccessible, or lacks necessary permissions.  \n- Support environment variables or placeholders for dynamic path resolution where applicable.  \n- Enforce security best practices by restricting access to the backup directory to authorized users or processes only.  \n- Ensure sufficient disk space is available in the backup directory to accommodate backup files without interruption.  \n- Implement concurrency controls such as file locking or synchronization if multiple backup operations may occur simultaneously.  \n**Examples:**  \n- \"/var/ftp/backups\"  \n- \"C:\\\\ftp\\\\backup\"  \n- \"./backup_files\"  \n- \"/home/user/ftp_backup\"  \n**Important notes:**  \n- The backup directory must have adequate storage capacity to prevent backup failures due to insufficient space.  \n- Permissions must allow the FTP process or service to create and modify files within the directory.  \n- Backup files may contain sensitive or confidential data; appropriate security measures should be enforced to protect them.  \n- Path formatting should align with the conventions of the underlying operating system to avoid errors.  \n- Failure to properly configure this path can result in loss of backup data or inability to recover"}}},"Jdbc":{"type":"object","description":"Configuration for JDBC import operations. Defines how data is written to a database\nvia a JDBC connection.\n\n**Query type determines which fields are required**\n\n| queryType        | Required fields              | Do NOT set        |\n|------------------|------------------------------|-------------------|\n| [\"per_record\"]   | query (array of SQL strings) | bulkInsert        |\n| [\"per_page\"]     | query (array of SQL strings) | bulkInsert        |\n| [\"bulk_insert\"]  | bulkInsert object            | query             |\n| [\"bulk_load\"]    | bulkLoad object              | query             |\n\n**Critical:** query IS AN ARRAY OF STRINGS\nThe query field must be an array of plain strings, NOT a single string and NOT an array of objects.\nCorrect: [\"INSERT INTO users (name) VALUES ('{{name}}')\"]\nWrong: \"INSERT INTO users ...\"\nWrong: [{\"query\": \"INSERT INTO users ...\"}]\n","properties":{"query":{"type":"array","items":{"type":"string"},"description":"Array of SQL query strings to execute. REQUIRED when queryType is [\"per_record\"] or [\"per_page\"].\n\nEach element is a complete SQL statement as a plain string. Typically contains a single query.\n\nUse Handlebars {{fieldName}} syntax to inject values from incoming records.\n\n**Format — array of strings**\n- CORRECT: [\"INSERT INTO users (name, email) VALUES ('{{name}}', '{{email}}')\"]\n- WRONG: \"INSERT INTO users ...\"  (not an array)\n- WRONG: [{\"query\": \"INSERT INTO users ...\"}]  (objects are invalid — causes Cast error)\n\n**Examples**\n- INSERT: [\"INSERT INTO users (name, email) VALUES ('{{name}}', '{{email}}')\"]\n- UPDATE: [\"UPDATE inventory SET qty = {{qty}} WHERE sku = '{{sku}}'\"]\n- MERGE/UPSERT: [\"MERGE INTO target USING (SELECT CAST(? AS VARCHAR) AS email) AS src ON target.email = src.email WHEN MATCHED THEN UPDATE SET name = ? 
WHEN NOT MATCHED THEN INSERT (name, email) VALUES (?, ?)\"]\n\n**String vs numeric values in handlebars**\n- Strings: wrap in single quotes — '{{name}}'\n- Numbers: no quotes — {{quantity}}\n"},"queryType":{"type":"array","items":{"type":"string","enum":["bulk_insert","per_record","per_page","bulk_load"]},"description":"Execution strategy for the SQL operation. REQUIRED. Must be an array with one value.\n\n**Decision tree**\n\n1. If UPDATE or UPSERT/MERGE → [\"per_record\"] (set query field)\n2. If INSERT with \"ignore existing\" / \"skip duplicates\" / match logic → [\"per_record\"] (set query field)\n3. If pure INSERT with no duplicate checking → [\"bulk_insert\"] (set bulkInsert object)\n4. If high-volume bulk load → [\"bulk_load\"] (set bulkLoad object)\n\n**Critical relationship to other fields**\n| queryType        | REQUIRES              | DO NOT SET   |\n|------------------|-----------------------|--------------|\n| [\"per_record\"]   | query (array)         | bulkInsert   |\n| [\"per_page\"]     | query (array)         | bulkInsert   |\n| [\"bulk_insert\"]  | bulkInsert object     | query        |\n| [\"bulk_load\"]    | bulkLoad object       | query        |\n\n**Examples**\n- Per-record upsert: [\"per_record\"]\n- Bulk insert: [\"bulk_insert\"]\n- Bulk load: [\"bulk_load\"]\n"},"bulkInsert":{"type":"object","description":"Bulk insert configuration. REQUIRED when queryType is [\"bulk_insert\"]. DO NOT SET when queryType is [\"per_record\"].\n\nEnables efficient batch insertion of records into a database table without per-record SQL.\n","properties":{"tableName":{"type":"string","description":"Target database table name for bulk insert. REQUIRED.\nCan include schema qualifiers (e.g., \"schema.tableName\").\n"},"batchSize":{"type":"string","description":"Number of records per batch during bulk insert.\nLarger values improve throughput but use more memory.\nCommon values: \"1000\", \"5000\".\n"}}},"bulkLoad":{"type":"object","description":"Bulk load configuration. REQUIRED when queryType is [\"bulk_load\"]. Uses database-native bulk loading for maximum throughput.\n","properties":{"tableName":{"type":"string","description":"Target database table name for bulk load. 
REQUIRED.\nCan include schema qualifiers (e.g., \"schema.tableName\").\n"},"primaryKeys":{"type":["array","null"],"items":{"type":"string"},"description":"Primary key column names for upsert/merge during bulk load.\nWhen set, existing rows matching these keys are updated; non-matching rows are inserted.\nExample: [\"id\"] or [\"order_id\", \"product_id\"] for composite keys.\n"}}},"lookups":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string","description":"Lookup name, referenced in field mappings or ignore logic."},"query":{"type":"string","description":"SQL query for the lookup (e.g., \"SELECT id FROM users WHERE email = '{{email}}'\")."},"extract":{"type":"string","description":"Path in the lookup result to use as the value (e.g., \"id\")."},"_lookupCacheId":{"type":"string","format":"objectId","description":"Reference to a cached lookup for performance optimization."},"allowFailures":{"type":"boolean","description":"When true, lookup failures do not stop the import."},"map":{"type":"object","description":"Static value mapping for lookup results."},"default":{"type":"string","description":"Default value when lookup returns no results."}}},"description":"Lookup queries executed against the JDBC connection to resolve reference values.\nEach lookup runs a SQL query and extracts a value to use in field mappings.\n"}}},"Mongodb":{"type":"object","description":"Configuration for MongoDB imports. This schema defines ONLY the MongoDB-specific configuration properties.\n\n**IMPORTANT:** Properties like `ignoreExisting`, `ignoreMissing`, `name`, `description` are NOT part of this schema - they belong at a different level. When generating this configuration, return ONLY the properties defined here (method, collection, ignoreExtract, etc.).","properties":{"method":{"type":"string","enum":["insertMany","updateOne"],"description":"The MongoDB operation this import performs on the target collection. Must be one of the two supported values: \"insertMany\" or \"updateOne\".\n\n- **insertMany**: Inserts incoming records into the collection as new documents. Use for create-only imports; combine with ignoreExtract or ignoreLookupFilter (plus the top-level ignoreExisting flag) to skip records that already exist.\n- **updateOne**: Updates a single document that matches the configured filter. Typically used together with the filter and update fields; set upsert to true when a new document should be created if no match is found.\n\n**Field behavior**\n- Determines whether incoming records are inserted as new documents or applied as updates to existing documents.\n- Controls which companion fields apply: document for insertMany; filter, update, and upsert for updateOne.\n- Influences how the ignore logic (ignoreExtract, ignoreLookupFilter) is interpreted for incoming records.\n\n**Implementation guidance**\n- Choose \"insertMany\" when the import should only create new documents.\n- Choose \"updateOne\" when the import should modify existing documents; enable upsert to fall back to insertion when no document matches the filter.\n- Only the two enum values are valid; do not supply HTTP verbs or other operation names here.\n\n**Examples**\n- \"insertMany\": insert each incoming record as a new document in the target collection.\n- \"updateOne\": update the first document matching the filter built from the incoming record."},"collection":{"type":"string","description":"Specifies the exact name of the MongoDB collection where data operations—including insertion, querying, updating, or deletion—will be executed. This property defines the precise target collection within the database, establishing the scope and context of the data affected by the API call. The collection name must strictly adhere to MongoDB's naming conventions to ensure compatibility, prevent conflicts with system-reserved collections, and maintain database integrity. 
Proper naming facilitates efficient data organization, indexing, query performance, and overall database scalability.\n\n**Field behavior**\n- Identifies the specific MongoDB collection for all CRUD (Create, Read, Update, Delete) operations.\n- Directly determines which subset of data is accessed, modified, or deleted during the API request.\n- Must be a valid, non-empty UTF-8 string that complies with MongoDB collection naming rules.\n- Case-sensitive, meaning collections named \"Users\" and \"users\" are distinct and separate.\n- Influences the application’s data flow and operational context by specifying the data container.\n- Changes to this property dynamically alter the target dataset for the API operation.\n\n**Implementation guidance**\n- Validate that the collection name is a non-empty UTF-8 string without null characters, spaces, or invalid symbols such as '$' or '.' except where allowed.\n- Ensure the name does not start with the reserved prefix \"system.\" to avoid conflicts with MongoDB internal collections.\n- Adopt consistent, descriptive, and meaningful naming conventions to enhance database organization, maintainability, and clarity.\n- Verify the existence of the collection before performing operations; implement robust error handling for cases where the collection does not exist.\n- Consider how the collection name impacts indexing strategies, query optimization, sharding, and overall database performance.\n- Maintain consistency in collection naming across the application to prevent unexpected behavior or data access issues.\n- Avoid overly long or complex names to reduce potential errors and improve readability.\n\n**Examples**\n- \"users\"\n- \"orders\"\n- \"inventory_items\"\n- \"logs_2024\"\n- \"customer_feedback\"\n- \"session_data\"\n- \"product_catalog\"\n\n**Important notes**\n- Collection names are case-sensitive and must be used consistently throughout the application to avoid data inconsistencies.\n- Avoid reserved names and prefixes such as \"system.\" to prevent interference with MongoDB’s internal collections and operations.\n- The choice of collection name can significantly affect database performance, indexing efficiency, and scalability.\n- Renaming a collection requires updating all references in the application and may necessitate data"},"filter":{"type":"string","description":"A MongoDB query filter object used to specify detailed criteria for selecting documents within a collection. This filter defines the exact conditions that documents must satisfy to be included in the query results, leveraging MongoDB's comprehensive and expressive query syntax. It supports a wide range of operators and expressions to enable precise and flexible querying of documents based on field values, existence, data types, ranges, nested properties, array contents, and more. 
The filter allows for complex logical combinations and can target deeply nested fields using dot notation, making it a powerful tool for fine-grained data retrieval.\n\n**Field behavior**\n- Determines which documents in the MongoDB collection are matched and returned by the query.\n- Supports complex query expressions including comparison operators (`$eq`, `$gt`, `$lt`, `$gte`, `$lte`, `$ne`), logical operators (`$and`, `$or`, `$not`, `$nor`), element operators (`$exists`, `$type`), array operators (`$in`, `$nin`, `$all`, `$elemMatch`), and evaluation operators (`$regex`, `$expr`).\n- Enables filtering based on simple fields, nested document properties using dot notation, and array contents.\n- Supports querying for null values, missing fields, and specific data types.\n- Allows combining multiple conditions using logical operators to form complex queries.\n- If omitted or provided as an empty object, the query defaults to returning all documents in the collection without filtering.\n- Can be used to optimize data retrieval by narrowing down the result set to only relevant documents.\n\n**Implementation guidance**\n- Construct the filter using valid MongoDB query operators and syntax to ensure correct behavior.\n- Validate and sanitize all input used to build the filter to prevent injection attacks or malformed queries.\n- Utilize dot notation to query nested fields within embedded documents or arrays.\n- Optimize query performance by aligning filters with indexed fields in the MongoDB collection.\n- Handle edge cases such as null values, missing fields, and data type variations appropriately within the filter.\n- Test filters thoroughly to ensure they return expected results and handle all relevant data scenarios.\n- Consider the impact of filter complexity on query execution time and resource usage.\n- Use projection and indexing strategies in conjunction with filters to improve query efficiency.\n- Be mindful of MongoDB version compatibility when using advanced operators or expressions.\n\n**Examples**\n- `{ \"status\": \"active\" }` — selects documents where the `status` field exactly matches \"active\".\n- `{ \"age\": { \"$gte\":"},"document":{"type":"string","description":"The main content or data structure representing a single record in a MongoDB database, encapsulated as a JSON-like object known as a document. This document consists of key-value pairs where keys are field names and values can be of various data types, including nested documents, arrays, strings, numbers, booleans, dates, and binary data, allowing for flexible and hierarchical data representation. It serves as the fundamental unit of data storage and retrieval within MongoDB collections and typically includes a unique identifier field (_id) that ensures each document can be distinctly accessed. Documents can contain metadata, support complex nested structures, and vary in schema within the same collection, reflecting MongoDB's schema-less design. 
This flexibility enables dynamic and evolving data models suited for diverse application needs, supporting efficient querying, indexing, and atomic operations at the document level.\n\n**Field behavior**\n- Represents an individual MongoDB document stored within a collection.\n- Contains key-value pairs with field names as keys and diverse data types as values, including nested documents and arrays.\n- Acts as the primary unit for data storage, retrieval, and manipulation in MongoDB.\n- Includes a mandatory unique identifier field (_id) unless auto-generated by MongoDB.\n- Supports flexible schema, allowing documents within the same collection to have different structures.\n- Can include metadata fields and support indexing for optimized queries.\n- Allows for embedding related data directly within the document to reduce the need for joins.\n- Supports atomic operations at the document level, ensuring consistency during updates.\n\n**Implementation guidance**\n- Ensure compliance with MongoDB BSON format constraints for data types and structure.\n- Validate field names to exclude reserved characters such as '.' and '$' unless explicitly permitted.\n- Support serialization and deserialization processes between application-level objects and MongoDB documents.\n- Enable handling of nested documents and arrays to represent complex data models effectively.\n- Consider indexing frequently queried fields within the document to enhance performance.\n- Monitor document size to stay within MongoDB’s 16MB limit to avoid performance degradation.\n- Use appropriate data types to optimize storage and query efficiency.\n- Implement validation rules or schema enforcement at the application or database level if needed to maintain data integrity.\n\n**Examples**\n- { \"_id\": ObjectId(\"507f1f77bcf86cd799439011\"), \"name\": \"Alice\", \"age\": 30, \"address\": { \"street\": \"123 Main St\", \"city\": \"Metropolis\" } }\n- { \"productId\": \""},"update":{"type":"string","description":"Update operation details specifying the precise modifications to be applied to one or more documents within a MongoDB collection. This field defines the exact changes using MongoDB's rich set of update operators, enabling atomic and efficient modifications to fields, arrays, or embedded documents without replacing entire documents unless explicitly intended. It supports a comprehensive range of update operations such as setting new values, incrementing numeric fields, removing fields, appending or removing elements in arrays, renaming fields, and more, providing fine-grained control over document mutations. Additionally, it accommodates advanced update mechanisms including pipeline-style updates introduced in MongoDB 4.2+, allowing for complex transformations and conditional updates within a single operation. 
The update specification can also leverage array filters and positional operators to target specific elements within arrays, enhancing the precision and flexibility of updates.\n\n**Field behavior**\n- Specifies the criteria and detailed modifications to apply to matching documents in a collection.\n- Supports a wide array of MongoDB update operators like `$set`, `$unset`, `$inc`, `$push`, `$pull`, `$addToSet`, `$rename`, `$mul`, `$min`, `$max`, and others.\n- Can target single or multiple documents depending on the operation context and options such as `multi` or `upsert`.\n- Ensures atomicity of update operations to maintain data integrity and consistency across concurrent operations.\n- Allows both partial updates using operators and full document replacements when the update object is a complete document without operators.\n- Supports pipeline-style updates for complex, multi-stage transformations and conditional logic within updates.\n- Enables the use of array filters and positional operators to selectively update elements within arrays based on specified conditions.\n- Handles upsert behavior to insert new documents if no existing documents match the update criteria, when enabled.\n\n**Implementation guidance**\n- Validate the update object rigorously to ensure compliance with MongoDB update syntax and semantics, preventing runtime errors.\n- Favor using update operators for partial updates to optimize performance and minimize unintended data overwrites.\n- Implement graceful handling for cases where no documents match the update criteria, optionally supporting upsert behavior to insert new documents if none exist.\n- Explicitly control whether updates affect single or multiple documents to avoid accidental mass modifications.\n- Incorporate robust error handling for invalid update operations, schema conflicts, or violations of database constraints.\n- Consider the impact of update operations on indexes, triggers, and other database mechanisms to maintain overall system performance and data integrity.\n- Support advanced update features such as array filters, positional operators"},"upsert":{"type":"boolean","description":"Upsert is a boolean flag that controls whether a MongoDB update operation should insert a new document if no existing document matches the specified query criteria, or strictly update existing documents only. When set to true, the operation performs an atomic \"update or insert\" action: it updates the matching document if found, or inserts a new document constructed from the update criteria if none exists. This ensures that after the operation, a document matching the criteria will exist in the collection. When set to false, the operation only updates documents that already exist and skips the operation entirely if no match is found, preventing any new documents from being created. 
This flag is essential for scenarios where maintaining data integrity and avoiding unintended document creation is critical.\n\n**Field behavior**\n- Determines whether the update operation can create a new document when no existing document matches the query.\n- If true, performs an atomic operation that either updates an existing document or inserts a new one.\n- If false, restricts the operation to updating existing documents only, with no insertion.\n- Influences the database state by enabling conditional document creation during update operations.\n- Affects the outcome of update queries by potentially expanding the dataset with new documents.\n\n**Implementation guidance**\n- Use upsert=true when you need to guarantee the existence of a document after the operation, regardless of prior presence.\n- Use upsert=false to restrict changes strictly to existing documents, avoiding unintended document creation.\n- Ensure that when upsert is true, the update document includes all required fields to create a valid new document.\n- Validate that the query filter and update document are logically consistent to prevent unexpected or malformed insertions.\n- Consider the impact on database size, indexing, and performance when enabling upsert, as it may increase document count.\n- Test the behavior in your specific MongoDB driver and version to confirm upsert semantics and compatibility.\n- Be mindful of potential race conditions in concurrent environments that could lead to duplicate documents if not handled properly.\n\n**Examples**\n- `upsert: true` — Update the matching document if found; otherwise, insert a new document based on the update criteria.\n- `upsert: false` — Update only if a matching document exists; skip the operation if no match is found.\n- Using upsert to create a user profile document if it does not exist during an update operation.\n- Employing upsert in configuration management to ensure default settings documents are present and updated as needed.\n\n**Important notes**\n- Upsert operations can increase"},"ignoreExtract":{"type":"string","description":"Enter the path to the field in the source record that should be used to identify existing records. If a value is found for this field, then the source record will be considered an existing record.\n\nThe dynamic field list makes it easy for you to select the field. If a field contains special characters (which may be the case for certain APIs), then the field is enclosed with square brackets [ ], for example, [field-name].\n\n[*] indicates that the specified field is an array, for example, items[*].id. In this case, you should replace * with a number corresponding to the array item. The value for only this array item (not, the entire array) is checked.\n**CRITICAL:** JSON path to the field in the incoming/source record that should be used to determine if the record already exists in the target system.\n\n**This field is ONLY used when `ignoreExisting` is set to true.** Note: `ignoreExisting` is a separate property that is NOT part of this mongodb configuration.\n\nWhen `ignoreExisting: true` is set (at the import level), this field specifies which field from the incoming data should be checked for a value. If that field has a value, the record is considered to already exist and will be skipped.\n\n**When to set this field**\n\n**ALWAYS set this field** when:\n1. The `ignoreExisting` property will be set to `true` (for \"ignore existing\" scenarios)\n2. 
The user's prompt mentions checking a specific field to identify existing records\n3. The prompt includes phrases like:\n   - \"indicated by the [field] field\"\n   - \"matching the [field] field\"\n   - \"based on [field]\"\n   - \"using the [field] field\"\n   - \"if [field] is populated\"\n   - \"where [field] exists\"\n\n**How to determine the value**\n\n1. **Look for the identifier field in the prompt:**\n   - \"ignoring existing customers indicated by the **id field**\" → `ignoreExtract: \"id\"`\n   - \"skip existing vendors based on the **email field**\" → `ignoreExtract: \"email\"`\n   - \"ignore records where **customerId** is populated\" → `ignoreExtract: \"customerId\"`\n\n2. **Use the field from the incoming/source data** (NOT the target system field):\n   - This is the field path in the data coming INTO the import\n   - Should match a field that will be present in the source records\n\n3. **Apply special character handling:**\n   - If a field contains special characters (hyphens, spaces, etc.), enclose with square brackets: `[field-name]`, `[customer id]`\n   - For array notation, use `[*]` to indicate array items: `items[*].id` (replace `*` with index if needed)\n\n**Field path syntax**\n\n- **Simple field:** `id`, `email`, `customerId`\n- **Nested field:** `customer.id`, `billing.email`\n- **Field with special chars:** `[customer-id]`, `[external_id]`, `[Customer Name]`\n- **Array field:** `items[*].id` (or `items[0].id` for specific index)\n- **Nested in array:** `addresses[*].postalCode`\n\n**Examples**\n\nThese examples show ONLY the mongodb configuration (not the full import structure):\n\n**Example 1: Simple field**\nPrompt: \"Create customers while ignoring existing customers indicated by the id field\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"customers\",\n  \"ignoreExtract\": \"id\"\n}\n```\n(Note: `ignoreExisting: true` should also be set, but at a different level - not shown here)\n\n**Example 2: Field with special characters**\nPrompt: \"Import vendors, skip existing ones based on the vendor-code field\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"vendors\",\n  \"ignoreExtract\": \"[vendor-code]\"\n}\n```\n\n**Example 3: Nested field**\nPrompt: \"Add accounts, ignore existing where customer.email matches\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"accounts\",\n  \"ignoreExtract\": \"customer.email\"\n}\n```\n\n**Example 4: No ignoreExisting scenario**\nPrompt: \"Create or update all customer records\"\n```json\n{\n  \"method\": \"updateOne\",\n  \"collection\": \"customers\"\n}\n```\n(ignoreExtract should NOT be set when ignoreExisting is not true)\n\n**Important notes**\n\n- **DO NOT SET** this field if `ignoreExisting` is not true\n- **`ignoreExisting` is NOT part of this mongodb schema** - it belongs at a different level\n- This specifies the SOURCE field, not the target MongoDB field\n- The field path is case-sensitive\n- Must be a valid field that exists in the incoming data\n- Works in conjunction with `ignoreExisting` - both must be set for this feature to work\n- If the specified field has a value in the incoming record, that record is considered existing and will be skipped\n\n**Common patterns**\n\n- \"ignore existing [records] indicated by the **[field]**\" → Set `ignoreExtract: \"[field]\"`\n- \"skip existing [records] based on **[field]**\" → Set `ignoreExtract: \"[field]\"`\n- \"if **[field]** is populated\" → Set `ignoreExtract: \"[field]\"`\n- No mention of specific field for 
identification → May need to use default like \"id\" or \"_id\""},"ignoreLookupFilter":{"type":"string","description":"If you are adding documents to your MongoDB instance and you have the Ignore Existing flag set to true, please enter a filter object here to find existing documents in this collection. The value of this field must be a valid JSON string describing a MongoDB filter object in the correct format and with the correct operators. Refer to the MongoDB documentation for the list of valid query operators and the correct filter object syntax.\n**CRITICAL:** A JSON-stringified MongoDB query filter used to search for existing documents in the target collection.\n\nIt defines the criteria to match incoming records against existing MongoDB documents. If the query returns a result, the record is considered \"existing\" and the operation (usually insert) is skipped.\n\n**Critical distinction vs `ignoreExtract`**\n- Use **`ignoreExtract`** when you just want to check if a field exists on the **INCOMING** record (no DB query).\n- Use **`ignoreLookupFilter`** when you need to query the **MONGODB DATABASE** to see if a record exists (e.g., \"check if email exists in DB\").\n\n**When to set this field**\nSet this field when:\n1. Top-level `ignoreExisting` is `true`.\n2. The prompt implies checking the database for duplicates (e.g., \"skip if email already exists\", \"prevent duplicate SKUs\", \"match on email\").\n\n**Format requirements**\n- Must be a valid **JSON string** representing a MongoDB query.\n- Use Handlebars `{{FieldName}}` to reference values from the incoming record.\n- Format: `\"{ \\\"mongoField\\\": \\\"{{incomingField}}\\\" }\"`\n\n**Examples**\n\n**Example 1: Match on a single field**\nPrompt: \"Insert users, skip if email already exists in the database\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"users\",\n  \"ignoreLookupFilter\": \"{\\\"email\\\":\\\"{{email}}\\\"}\"\n}\n```\n\n**Example 2: Match on multiple fields**\nPrompt: \"Add products, ignore if SKU and VerifyID match existing products\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"products\",\n  \"ignoreLookupFilter\": \"{\\\"sku\\\":\\\"{{sku}}\\\", \\\"verify_id\\\":\\\"{{verify_id}}\\\"}\"\n}\n```\n\n**Example 3: Using operators**\nPrompt: \"Import orders, ignore if order_id matches\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"orders\",\n  \"ignoreLookupFilter\": \"{\\\"order_id\\\": { \\\"$eq\\\": \\\"{{id}}\\\" } }\"\n}\n```\n\n**Important notes**\n- The value MUST be a string (stringify the JSON).\n- Mongo field names (keys) must match the **collection schema**.\n- Handlebars variables (values) must match the **incoming data**."}}},"NetSuite":{"type":"object","description":"Configuration for NetSuite imports","properties":{"lookups":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the category or classification of the lookup being referenced. 
It defines the nature or kind of data that the lookup represents, enabling the system to handle it appropriately based on its type.\n\n**Field behavior**\n- Determines the category or classification of the lookup.\n- Influences how the lookup data is processed and validated.\n- Helps in filtering or grouping lookups by their type.\n- Typically represented as a string or enumerated value indicating the lookup category.\n\n**Implementation guidance**\n- Define a clear set of allowed values or an enumeration for the type to ensure consistency.\n- Validate the type value against the predefined set during data input or API requests.\n- Use the type property to drive conditional logic or UI rendering related to the lookup.\n- Document all possible type values and their meanings for API consumers.\n\n**Examples**\n- \"country\" — representing a lookup of countries.\n- \"currency\" — representing a lookup of currency codes.\n- \"status\" — representing a lookup of status codes or states.\n- \"category\" — representing a lookup of item categories.\n\n**Important notes**\n- The type value should be consistent across the system to avoid ambiguity.\n- Changing the type of an existing lookup may affect dependent systems or data integrity.\n- The type property is essential for distinguishing between different lookup datasets.\n\n**Dependency chain**\n- May depend on or be referenced by other properties that require lookup data.\n- Used in conjunction with lookup values or keys to provide meaningful data.\n- Can influence validation rules or business logic applied to the lookup.\n\n**Technical details**\n- Typically implemented as a string or an enumerated type in the API schema.\n- Should be indexed or optimized for quick filtering and retrieval.\n- May be case-sensitive or case-insensitive depending on system design.\n- Should be included in API documentation with all valid values listed."},"searchField":{"type":"string","description":"searchField specifies the particular field or attribute within a dataset or index that the search operation should target. 
This property determines which field the search query will be applied to, enabling focused and efficient retrieval of relevant information.\n\n**Field behavior**\n- Defines the specific field in the data source to be searched.\n- Limits the search scope to the designated field, improving search precision.\n- Can accept field names as strings, corresponding to indexed or searchable attributes.\n- If omitted or null, the search may default to a predefined field or perform a broad search across multiple fields.\n\n**Implementation guidance**\n- Ensure the field name provided matches exactly with the field names defined in the data schema or index.\n- Validate that the field is searchable and supports the type of queries intended.\n- Consider supporting nested or compound field names if the data structure is hierarchical.\n- Provide clear error handling for invalid or unsupported field names.\n- Allow for case sensitivity or insensitivity based on the underlying search engine capabilities.\n\n**Examples**\n- \"title\" — to search within the title field of documents.\n- \"author.name\" — to search within a nested author name field.\n- \"description\" — to search within the description or summary field.\n- \"tags\" — to search within a list or array of tags associated with items.\n\n**Important notes**\n- The effectiveness of the search depends on the indexing and search capabilities of the specified field.\n- Using an incorrect or non-existent field name may result in no search results or errors.\n- Some fields may require special handling, such as tokenization or normalization, to support effective searching.\n- The property is case-sensitive if the underlying system treats field names as such.\n\n**Dependency chain**\n- Dependent on the data schema or index configuration defining searchable fields.\n- May interact with other search parameters such as search query, filters, or sorting.\n- Influences the search engine or database query construction.\n\n**Technical details**\n- Typically represented as a string matching the field identifier in the data source.\n- May support dot notation for nested fields (e.g., \"user.address.city\").\n- Should be sanitized to prevent injection attacks or malformed queries.\n- Used by the search backend to construct field-specific query clauses."},"expression":{"type":"string","description":"Expression used to define the criteria or logic for the lookup operation. 
This expression typically consists of a string that can include variables, operators, functions, or references to other data elements, enabling dynamic and flexible lookup conditions.\n\n**Field behavior**\n- Specifies the condition or formula that determines how the lookup is performed.\n- Can include variables, constants, operators, and functions to build complex expressions.\n- Evaluated at runtime to filter or select data based on the defined logic.\n- Supports dynamic referencing of other fields or parameters within the lookup context.\n\n**Implementation guidance**\n- Ensure the expression syntax is consistent with the supported expression language or parser.\n- Validate expressions before execution to prevent runtime errors.\n- Support common operators (e.g., ==, !=, >, <, AND, OR) and functions as per the system capabilities.\n- Allow expressions to reference other properties or variables within the lookup scope.\n- Provide clear error messages if the expression is invalid or cannot be evaluated.\n\n**Examples**\n- `\"status == 'active' AND score > 75\"`\n- `\"userRole == 'admin' OR accessLevel >= 5\"`\n- `\"contains(tags, 'urgent')\"`\n- `\"date >= '2024-01-01' AND date <= '2024-12-31'\"`\n\n**Important notes**\n- The expression must be a valid string conforming to the expected syntax.\n- Incorrect or malformed expressions may cause lookup failures or unexpected results.\n- Expressions should be designed to optimize performance, avoiding overly complex or resource-intensive logic.\n- The evaluation context (available variables and functions) should be clearly documented for users.\n\n**Dependency chain**\n- Depends on the lookup context providing necessary variables or data references.\n- May depend on the expression evaluation engine or parser integrated into the system.\n- Influences the outcome of the lookup operation and subsequent data processing.\n\n**Technical details**\n- Typically represented as a string data type.\n- Parsed and evaluated by an expression engine or interpreter at runtime.\n- May support standard expression languages such as SQL-like syntax, JSONPath, or custom domain-specific languages.\n- Should handle escaping and quoting of string literals within the expression."},"resultField":{"type":"string","description":"resultField specifies the name of the field in the lookup result that contains the desired value to be retrieved or used.\n\n**Field behavior**\n- Identifies the specific field within the lookup result data to extract.\n- Determines which piece of information from the lookup response is returned or processed.\n- Must correspond to a valid field name present in the lookup result structure.\n- Used to map or transform lookup data into the expected output format.\n\n**Implementation guidance**\n- Ensure the field name matches exactly with the field in the lookup result, including case sensitivity if applicable.\n- Validate that the specified field exists in all possible lookup responses to avoid runtime errors.\n- Use this property to customize which data from the lookup is utilized downstream.\n- Consider supporting nested field paths if the lookup result is a complex object.\n\n**Examples**\n- \"email\" — to extract the email address field from the lookup result.\n- \"userId\" — to retrieve the user identifier from the lookup data.\n- \"address.city\" — to access a nested city field within an address object in the lookup result.\n\n**Important notes**\n- Incorrect or misspelled field names will result in missing or null values.\n- The field must be 
present in the lookup response schema; otherwise, the lookup will not yield useful data.\n- This property does not perform any transformation; it only selects the field to be used.\n\n**Dependency chain**\n- Depends on the structure and schema of the lookup result data.\n- Used in conjunction with the lookup operation that fetches the data.\n- May influence downstream processing or mapping logic that consumes the lookup output.\n\n**Technical details**\n- Typically a string representing the key or path to the desired field in the lookup result object.\n- May support dot notation for nested fields if supported by the implementation.\n- Should be validated against the lookup result schema during configuration or runtime."},"includeInactive":{"type":"boolean","description":"IncludeInactive: Specifies whether to include inactive items in the lookup results.\n**Field behavior**\n- When set to true, the lookup results will contain both active and inactive items.\n- When set to false or omitted, only active items will be included in the results.\n- Affects the filtering logic applied during data retrieval for lookups.\n**Implementation guidance**\n- Default the value to false if not explicitly provided to avoid unintended inclusion of inactive items.\n- Ensure that the data source supports filtering by active/inactive status.\n- Validate the input to accept only boolean values.\n- Consider performance implications when including inactive items, especially if the dataset is large.\n**Examples**\n- includeInactive: true — returns all items regardless of their active status.\n- includeInactive: false — returns only items marked as active.\n**Important notes**\n- Including inactive items may expose deprecated or obsolete data; use with caution.\n- The definition of \"inactive\" depends on the underlying data model and should be clearly documented.\n- This property is typically used in administrative or audit contexts where full data visibility is required.\n**Dependency chain**\n- Dependent on the data source’s ability to distinguish active vs. inactive items.\n- May interact with other filtering or pagination parameters in the lookup API.\n**Technical details**\n- Data type: Boolean.\n- Default value: false.\n- Typically implemented as a query parameter or request body field in lookup API calls.\n- Requires backend support to filter data based on active status flags or timestamps."},"_id":{"type":"object","description":"_id: The unique identifier for the lookup entry within the system. 
This identifier is typically a string or an ObjectId that uniquely distinguishes each lookup record from others in the database or dataset.\n**Field behavior**\n- Serves as the primary key for the lookup entry.\n- Must be unique across all lookup entries.\n- Immutable once assigned; should not be changed to maintain data integrity.\n- Used to reference the lookup entry in queries, updates, and deletions.\n**Implementation guidance**\n- Use a consistent format for the identifier, such as a UUID or database-generated ObjectId.\n- Ensure uniqueness by leveraging database constraints or application-level checks.\n- Assign the _id at the time of creation of the lookup entry.\n- Avoid exposing internal _id values directly to end-users unless necessary.\n**Examples**\n- \"507f1f77bcf86cd799439011\"\n- \"123e4567-e89b-12d3-a456-426614174000\"\n- \"lookup_001\"\n**Important notes**\n- The _id field is critical for data integrity and should be handled with care.\n- Changing the _id after creation can lead to broken references and data inconsistencies.\n- In some systems, the _id may be automatically generated by the database.\n**Dependency chain**\n- May be referenced by other fields or entities that link to the lookup entry.\n- Dependent on the database or storage system's method of generating unique identifiers.\n**Technical details**\n- Typically stored as a string or a specialized ObjectId type depending on the database.\n- Indexed to optimize query performance.\n- Often immutable and enforced by database schema constraints."},"default":{"type":["object","null"],"description":"A boolean property indicating whether the lookup is the default selection among multiple lookup options.\n\n**Field behavior**\n- Determines if this lookup is automatically selected or used by default in the absence of user input.\n- Only one lookup should be marked as default within a given context to avoid ambiguity.\n- Influences the initial state or value presented to the user or system when multiple lookups are available.\n\n**Implementation guidance**\n- Set to `true` for the lookup that should be the default choice; all others should be `false` or omitted.\n- Validate that only one lookup per group or context has `default` set to `true`.\n- Use this property to pre-populate fields or guide user selections in UI or processing logic.\n\n**Examples**\n- `default: true` — This lookup is the default selection.\n- `default: false` — This lookup is not the default.\n- Omitted `default` property implies the lookup is not the default.\n\n**Important notes**\n- If multiple lookups have `default` set to `true`, the system behavior may be unpredictable.\n- The absence of a `default` lookup means no automatic selection; user input or additional logic is required.\n- This property is typically optional but recommended for clarity in multi-lookup scenarios.\n\n**Dependency chain**\n- Related to the `lookups` array or collection where multiple lookup options exist.\n- May influence UI components or backend logic that rely on default selections.\n\n**Technical details**\n- Data type: Boolean (`true` or `false`).\n- Default value: Typically `false` if omitted.\n- Should be included at the same level as other lookup properties within the `lookups` object."}},"description":"A comprehensive collection of lookup tables or reference data sets designed to support and enhance the main data model or application logic. 
These lookup tables provide predefined, authoritative sets of values that can be consistently referenced throughout the system to ensure data uniformity, minimize redundancy, and facilitate robust data validation and user interface consistency. They serve as centralized repositories for static or infrequently changing data that underpin dynamic application behavior, such as dropdown options, status codes, categorizations, and mappings. By centralizing this reference data, the system promotes maintainability, reduces errors, and enables seamless integration across different modules and services. Lookup tables may also support complex data structures, localization, versioning, and hierarchical relationships to accommodate diverse application requirements and evolving business rules.\n\n**Field behavior**\n- Contains multiple distinct lookup tables, each identified by a unique name representing a specific domain or category of reference data.\n- Each lookup table comprises structured entries, which may be simple key-value pairs or complex objects with multiple attributes, defining valid options, mappings, or metadata.\n- Lookup tables standardize inputs, support UI components like dropdowns and filters, enforce data integrity, and drive conditional logic across the application.\n- Typically static or updated infrequently, ensuring stability and consistency in dependent processes.\n- Supports localization or internationalization to provide values in multiple languages or regional formats.\n- May represent hierarchical or relational data structures within lookup tables to capture complex relationships.\n- Enables versioning to track changes over time and facilitate rollback, auditing, or backward compatibility.\n- Acts as a single source of truth for reference data, reducing duplication and discrepancies across systems.\n- Can be extended or customized to meet specific domain or organizational needs without impacting core application logic.\n\n**Implementation guidance**\n- Organize as a dictionary or map where each key corresponds to a lookup table name and the associated value contains the set of entries.\n- Implement efficient loading, caching, and retrieval mechanisms to optimize performance and minimize latency.\n- Provide controlled update or refresh capabilities to allow seamless modifications without disrupting ongoing application operations.\n- Enforce strict validation rules on lookup entries to prevent invalid, inconsistent, or duplicate data.\n- Incorporate support for localization and internationalization where applicable, enabling multi-language or region-specific values.\n- Maintain versioning or metadata to track changes and support backward compatibility.\n- Consider access control policies if lookup data includes sensitive or restricted information.\n- Design APIs or interfaces for easy querying, filtering, and retrieval of lookup data by client applications.\n- Ensure synchronization mechanisms are in place"},"operation":{"type":"string","enum":["add","update","addupdate","attach","detach","delete"],"description":"Specifies the type of operation to perform on NetSuite records. This is a plain string value, NOT an object.\n\n**Valid values**\n- \"add\" — Create a new record. Use when importing new records only.\n- \"update\" — Update an existing record. Requires internalIdLookup to find the record.\n- \"addupdate\" — Upsert: update if found, create if not. Requires internalIdLookup. 
This is the most common operation.\n- \"attach\" — Attach a record to another record.\n- \"detach\" — Detach a record from another record.\n- \"delete\" — Delete a record. Requires internalIdLookup to find the record.\n\n**Implementation guidance**\n- Value is automatically lowercased by the API.\n- For \"addupdate\", \"update\", and \"delete\", you MUST also set internalIdLookup to specify how to find existing records.\n- For \"add\" with ignoreExisting, set internalIdLookup to check for duplicates before creating.\n- Default to \"addupdate\" when the user says \"sync\", \"upsert\", or \"create or update\".\n- Use \"add\" when the user says \"create\" or \"insert\" without mentioning updates.\n\n**Examples**\n- \"addupdate\" — for syncing records (create or update)\n- \"add\" — for creating new records only\n- \"update\" — for updating existing records only\n- \"delete\" — for removing records\n\n**Important notes**\n- This field is a string, NOT an object. Set it directly: \"operation\": \"addupdate\"\n- Do NOT wrap it in an object like {\"type\": \"addupdate\"} — that will cause a validation error."},"isFileProvider":{"type":"boolean","description":"isFileProvider indicates whether the NetSuite integration functions as a file provider, enabling comprehensive capabilities to access, manage, and manipulate files within the NetSuite environment. When enabled, this property grants the integration full file-related operations such as browsing directories, uploading new files, downloading existing files, updating file metadata, and deleting files directly through the integration interface. This functionality is essential for workflows that require seamless, automated interaction with NetSuite's file storage system, facilitating efficient document handling, version control, and integration with external file management processes. 
It also impacts the availability of specific API endpoints and user interface components related to file management, and may influence system performance and API rate limits due to the volume of file operations.\n\n**Field behavior**\n- When set to true, activates full file management features including browsing, uploading, downloading, updating, and deleting files within NetSuite.\n- When false or omitted, disables all file provider functionalities, preventing any file-related operations through the integration.\n- Acts as a toggle controlling the availability of file-centric capabilities and related API endpoints.\n- Changes typically require re-authentication or reinitialization to apply updated file access permissions correctly.\n- May affect integration performance and API rate limits due to potentially high file operation volumes.\n\n**Implementation guidance**\n- Enable only if direct, comprehensive interaction with NetSuite files is required.\n- Ensure the integration has appropriate NetSuite permissions, roles, and OAuth scopes for secure file access and manipulation.\n- Validate strictly as a boolean to prevent misconfiguration.\n- Assess security implications thoroughly, enforcing strict access controls and audit logging when enabled.\n- Monitor API usage and system performance closely, as file operations can increase load and impact rate limits.\n- Coordinate with NetSuite administrators to align file access policies with organizational security and compliance standards.\n- Consider effects on middleware and downstream systems handling file transfers, error handling, and synchronization.\n\n**Examples**\n- `isFileProvider: true` — Enables full file provider capabilities, allowing browsing, uploading, downloading, and managing files within NetSuite.\n- `isFileProvider: false` — Disables all file-related functionalities, restricting the integration to non-file operations only.\n\n**Important notes**\n- Enabling file provider functionality may require additional NetSuite OAuth scopes, roles, or permissions specifically related to file access.\n- File operations can significantly impact API rate limits and quotas; plan and monitor usage accordingly.\n- This property exclusively controls file management capabilities and does not affect other integration functionalities."},"customFieldMetadata":{"type":"object","description":"Metadata information related to custom fields defined within the NetSuite environment, offering a comprehensive and detailed overview of each custom field's attributes, configuration settings, and operational parameters. This includes critical details such as the field type (e.g., checkbox, select, date, text), label, internal ID, source lists or records for select-type fields, default values, validation rules, display properties, and behavioral flags like mandatory, read-only, or hidden status. The metadata acts as an authoritative reference for accurately managing, validating, and utilizing custom fields during integrations, data processing, and API interactions, ensuring that custom field data is interpreted and handled precisely according to its defined structure, constraints, and business logic. 
It also enables dynamic adaptation to changes in custom field configurations, supporting robust and flexible integration workflows that maintain data integrity and enforce business rules effectively.\n\n**Field behavior**\n- Provides exhaustive metadata for each custom field, including type, label, internal ID, source references, default values, validation criteria, and display properties.\n- Reflects the current and authoritative configuration and constraints of custom fields within the NetSuite account.\n- Enables dynamic validation, processing, and rendering of custom fields during data import, export, and API operations.\n- Supports understanding of field dependencies, mandatory status, read-only or hidden flags, and other behavioral characteristics.\n- Facilitates enforcement of business rules and data integrity through validation and conditional logic based on metadata.\n\n**Implementation guidance**\n- Ensure this property is consistently populated with accurate, up-to-date, and comprehensive metadata for all relevant custom fields in the integration context.\n- Regularly synchronize and refresh metadata with the NetSuite environment to capture any additions, modifications, or deletions of custom fields.\n- Leverage this metadata to validate data payloads before transmission and to correctly interpret and process received data.\n- Use metadata details to implement field-specific logic, such as enforcing mandatory fields, applying validation rules, handling default values, and respecting read-only or hidden statuses.\n- Consider caching metadata where appropriate to optimize performance, while ensuring mechanisms exist to detect and apply updates promptly.\n\n**Examples**\n- Field type: \"checkbox\", label: \"Is Active\", id: \"custentity_is_active\", mandatory: true\n- Field type: \"select\", label: \"Customer Type\", id: \"custentity_customer_type\", sourceList: \"customerTypes\", readOnly: false\n- Field type: \"date\", label: \"Contract Start Date\", id: \"custentity_contract_start"},"recordType":{"type":"string","description":"recordType specifies the exact type of record within the NetSuite system that the API operation will interact with. This property is critical as it defines the schema, fields, validation rules, workflows, and behaviors applicable to the record being accessed, created, updated, or deleted. By accurately specifying the recordType, the API can correctly interpret the structure of the data payload, enforce appropriate business logic, and route the request to the correct processing module within NetSuite. This ensures precise data manipulation, retrieval, and integration aligned with the intended record context. 
Proper use of recordType enables seamless interaction with both standard and custom NetSuite records, supporting robust and flexible API operations across diverse business scenarios.\n\n**Field behavior**\n- Defines the specific NetSuite record type for the API request (e.g., customer, salesOrder, invoice).\n- Directly influences validation rules, available fields, permissible operations, and workflows in the request or response.\n- Determines the context, permissions, and business logic applicable to the record.\n- Must be set accurately and consistently to ensure correct processing and avoid errors.\n- Affects how the API interprets, validates, and processes the associated data payload.\n- Supports interaction with both standard and custom record types configured in the NetSuite account.\n- Enables the API to dynamically adapt to different record structures and business processes based on the recordType.\n\n**Implementation guidance**\n- Use the exact NetSuite internal record type identifiers as defined in official NetSuite documentation and account-specific customizations.\n- Validate the recordType value against the list of supported record types before processing the request to prevent errors.\n- Ensure compatibility of the recordType with the specific API endpoint, operation, and NetSuite account configuration.\n- Implement robust error handling for missing, invalid, or unsupported recordType values, providing clear and actionable feedback.\n- Regularly update the recordType list to reflect changes in NetSuite releases, custom record definitions, and account-specific configurations.\n- Consider role-based access and feature enablement that may affect the availability or behavior of certain record types.\n- When dealing with custom records, verify that the recordType matches the custom record’s internal ID and that the API user has appropriate permissions.\n\n**Examples**\n- \"customer\"\n- \"salesOrder\"\n- \"invoice\"\n- \"employee\"\n- \"itemFulfillment\"\n- \"vendorBill\"\n- \"purchaseOrder\"\n- \"customRecordType123\" (example of a custom record)\n\n**Important notes**\n- The recordType value is case"},"recordTypeId":{"type":"string","description":"recordTypeId is the unique identifier that specifies the exact type or category of a record within the NetSuite system. It defines the structure, fields, validation rules, workflows, and behavior applicable to the record instance, ensuring that API operations target the correct record schema and processing logic. This identifier is essential for routing requests to the appropriate record handlers, enforcing data integrity, and validating the data according to the specific record type's requirements. 
Once set for a record instance, the recordTypeId is immutable, as changing it would imply a fundamentally different record type and could lead to inconsistencies or errors in processing.\n\n**Field behavior**\n- Specifies the precise category or type of NetSuite record being accessed or manipulated.\n- Determines the applicable schema, fields, validation rules, workflows, and business logic for the record.\n- Directs API requests to the correct processing logic, endpoint, and validation routines based on record type.\n- Immutable for a given record instance; changing it indicates a different record type and is not permitted.\n- Influences permissions, access control, and integration behaviors tied to the record type.\n- Affects how related records and dependencies are handled within the system.\n\n**Implementation guidance**\n- Use the exact internal string ID or numeric identifier as defined by NetSuite for record types.\n- Validate the recordTypeId against the current list of supported, enabled, and custom NetSuite record types before processing.\n- Ensure consistency and compatibility between recordTypeId and related fields or data structures within the payload.\n- Implement robust error handling for invalid, unsupported, deprecated, or mismatched recordTypeId values.\n- Keep the recordTypeId definitions synchronized with updates, customizations, and configurations from the NetSuite account to avoid mismatches.\n- Consider case sensitivity and formatting rules as dictated by the NetSuite API specifications.\n- When integrating with custom record types, verify that custom IDs conform to naming conventions and are properly registered in the NetSuite environment.\n\n**Examples**\n- \"customer\" — representing a standard customer record type.\n- \"salesOrder\" — representing a sales order record type.\n- \"invoice\" — representing an invoice record type.\n- Numeric identifiers such as \"123\" if the system uses numeric codes for record types.\n- Custom record type IDs defined within a specific NetSuite account, e.g., \"customRecordType_456\".\n- \"employee\" — representing an employee record type.\n- \"purchaseOrder\" — representing a purchase order record type.\n\n**Important notes**"},"retryUpdateAsAdd":{"type":"boolean","description":"A boolean flag that controls whether failed update operations in NetSuite integrations should be automatically retried as add operations. When set to true, if an update attempt fails—commonly because the target record does not exist—the system will attempt to add the record instead. This mechanism helps maintain data synchronization by addressing scenarios where records may have been deleted, never created, or are otherwise missing, thereby reducing manual intervention and minimizing data discrepancies. 
It ensures smoother integration workflows by providing a fallback strategy specifically for update failures, enhancing resilience and data consistency.\n\n**Field behavior**\n- Governs retry logic exclusively for update operations within NetSuite integrations.\n- When true, triggers an automatic retry as an add operation upon update failure.\n- When false or unset, update failures result in immediate error responses without retries.\n- Facilitates handling of missing or deleted records to improve synchronization accuracy.\n- Does not influence initial add operations or other API call types.\n- Activates retry logic only after detecting a failed update attempt.\n\n**Implementation guidance**\n- Enable this flag when there is a possibility that update requests target non-existent records.\n- Evaluate the potential for duplicate record creation if the add operation is not idempotent.\n- Implement comprehensive error handling and logging to monitor retry attempts and outcomes.\n- Conduct thorough testing in staging or development environments before deploying to production.\n- Coordinate with other retry or error recovery configurations to avoid conflicting behaviors.\n- Ensure accurate detection of update failure causes to apply retries appropriately and avoid unintended consequences.\n\n**Examples**\n- `retryUpdateAsAdd: true` — Automatically retries adding the record if an update fails due to a missing record.\n- `retryUpdateAsAdd: false` — Update failures are reported immediately without retrying as add operations.\n\n**Important notes**\n- Enabling this flag may lead to duplicate records if update failures occur for reasons other than missing records.\n- Use with caution in environments requiring strict data integrity, auditability, and compliance.\n- This flag only modifies retry behavior after an update failure; it does not alter the initial add or update request logic.\n- Accurate failure cause analysis is critical to ensure retries are applied correctly and safely.\n\n**Dependency chain**\n- Relies on execution of update operations against NetSuite records.\n- Works in conjunction with error detection and handling mechanisms to identify failure reasons.\n- Commonly used alongside other retry policies or error recovery settings to enhance integration robustness.\n\n**Technical details**\n- Data type: boolean\n- Default value: false"},"batchSize":{"type":"number","description":"Specifies the number of records to process in a single batch operation when interacting with the NetSuite API, directly influencing the efficiency, performance, and resource utilization of data processing tasks. This parameter controls how many records are sent or retrieved in one API call, balancing throughput with system constraints such as memory usage, network latency, and API rate limits. Proper configuration of batchSize is essential to optimize processing speed while minimizing the risk of errors, timeouts, or throttling by the NetSuite API. Adjusting batchSize allows for fine-tuning of integration workflows to accommodate varying workloads, system capabilities, and API restrictions, ensuring reliable and efficient data synchronization.  \n**Field behavior:**  \n- Determines the count of records included in each batch request to the NetSuite API.  \n- Directly impacts the number of API calls required to complete processing of all records.  \n- Larger batch sizes can improve throughput by reducing the number of calls but may increase memory consumption, processing latency, and risk of timeouts per call.  
\n- Smaller batch sizes reduce memory footprint and improve responsiveness but may increase the total number of API calls and associated overhead.  \n- Influences error handling complexity, as failures in larger batches may require more extensive retries or partial processing logic.  \n- Affects how quickly data is processed and synchronized between systems.  \n**Implementation guidance:**  \n- Select a batch size that balances optimal performance with available system resources, network conditions, and API constraints.  \n- Conduct thorough performance testing with varying batch sizes to identify the most efficient configuration for your specific workload and environment.  \n- Ensure the batch size complies with NetSuite API limits, including maximum records per request and rate limiting policies.  \n- Implement robust error handling to manage partial failures within batches, including retry mechanisms, logging, and potential batch splitting.  \n- Consider the complexity, size, and processing time of individual records when determining batch size, as more complex or larger records may require smaller batches.  \n- Monitor runtime metrics and adjust batchSize dynamically if possible, based on system load, API response times, and error rates.  \n**Examples:**  \n- 100 (process 100 records per batch for balanced throughput and resource use)  \n- 500 (process 500 records per batch to maximize throughput in high-capacity environments)  \n- 50 (smaller batch size suitable for environments with limited memory, stricter API limits, or higher error sensitivity)  \n**Important notes:**  \n- The batchSize setting"},"internalIdLookup":{"type":"object","properties":{"extract":{"type":"string","description":"Extract the relevant internal ID from the provided NetSuite data response to uniquely identify and reference specific records for subsequent API operations. This involves accurately parsing the response data—whether in JSON, XML, or other formats—to isolate the internal ID, which serves as a critical key for actions such as updates, deletions, or detailed queries within NetSuite. The extraction process must be precise, resilient to variations in response structures, and capable of handling nested or complex data formats to ensure reliable downstream processing and prevent errors in record manipulation.  \n**Field behavior:**  \n- Extracts a unique internal ID value from NetSuite API responses or data objects.  \n- Acts as the primary identifier for referencing and manipulating NetSuite records.  \n- Typically invoked after lookup, search, or query operations to obtain the necessary ID for further API calls.  \n- Supports handling of multiple data formats including JSON, XML, and potentially others.  \n- Ensures the extracted ID is valid, correctly formatted, and usable for subsequent operations.  \n- Handles nested or complex response structures to reliably locate the internal ID.  \n- Operates as a read-only field derived exclusively from API response data.  \n\n**Implementation guidance:**  \n- Implement robust and flexible parsing logic tailored to the expected NetSuite response formats.  \n- Validate the extracted internal ID against expected patterns (e.g., numeric strings) to ensure correctness.  \n- Incorporate comprehensive error handling for cases where the internal ID is missing, malformed, or unexpected.  \n- Design the extraction mechanism to be configurable and adaptable to changes in NetSuite API response schemas or record types.  
\n- Support asynchronous processing if API responses are received asynchronously or in streaming formats.  \n- Log extraction attempts and failures to facilitate debugging and maintenance.  \n- Regularly update extraction logic to accommodate API version changes or schema updates.  \n\n**Examples:**  \n- Extracting `\"internalId\": \"12345\"` from a JSON response such as `{ \"internalId\": \"12345\", \"name\": \"Sample Record\" }`.  \n- Parsing nested XML elements or attributes to retrieve the internalId value, e.g., `<record internalId=\"12345\">`.  \n- Using the extracted internal ID to perform update, delete, or detailed fetch operations on a NetSuite record.  \n- Handling cases where the internal ID is embedded within deeply nested or complex response objects.  \n\n**Important notes:**  \n- The internal ID uniquely identifies records within NetSuite and is essential for accurate and reliable API"},"searchField":{"type":"string","description":"The specific field within a NetSuite record that serves as the key attribute for performing an internal ID lookup search. This field determines which property of the record will be queried to locate and retrieve the corresponding internal ID, thereby directly influencing the precision, relevance, and efficiency of the lookup operation. Selecting an appropriate searchField is critical for ensuring accurate matches and optimal performance, as it typically should be a unique, indexed, or otherwise optimized attribute within the record schema. The choice of searchField impacts not only the speed of the search but also the uniqueness and reliability of the results returned, making it essential to align with the record type, user permissions, and any customizations present in the NetSuite environment. Proper selection of this field helps avoid ambiguous results and enhances the overall robustness of the lookup process.\n\n**Field behavior**\n- Identifies the exact attribute of the NetSuite record to be searched.\n- Directs the lookup process to compare the provided search value against this field.\n- Affects the accuracy, relevance, and uniqueness of the internal ID retrieved.\n- Commonly corresponds to fields that are indexed or uniquely constrained for efficient querying.\n- Determines the scope and filtering of the search within the record type.\n- Influences how the system handles multiple matches or ambiguous results.\n- Must correspond to a searchable field that the user has permission to access.\n\n**Implementation guidance**\n- Use only valid, supported field names as defined in the NetSuite record schema.\n- Prefer fields that are indexed or optimized for search to enhance lookup speed.\n- Verify the existence and searchability of the field on the specific record type being queried.\n- Choose fields that minimize duplicates to avoid ambiguous or multiple results.\n- Ensure exact case sensitivity and spelling alignment with NetSuite’s API definitions.\n- Test the field’s searchability considering user permissions, record configurations, and any customizations.\n- Avoid fields that are write-only, restricted, or non-searchable due to NetSuite permissions or settings.\n- Consider the impact of custom fields or scripts that may alter standard field behavior.\n- Validate that the field supports the type of search operation intended (e.g., exact match, partial match).\n\n**Examples**\n- \"externalId\" — to locate records by an externally assigned identifier.\n- \"email\" — to find records using an email address attribute.\n- \"name\" — to search by 
the record’s name or title field.\n- \"tranId\" — to query transaction records by their transaction ID.\n- \"entityId"},"expression":{"type":"string","description":"A string representing a search expression used to filter or query records within the NetSuite system based on specific criteria. This expression defines the precise conditions that records must meet to be included in the search results, enabling targeted and efficient data retrieval. It supports a rich syntax including logical operators (AND, OR, NOT), comparison operators (=, !=, >, <, >=, <=), field references, and literal values. Expressions can be nested to form complex queries tailored to specific business needs, allowing for granular control over the search logic. This property is essential for performing internal lookups by matching records against the defined criteria, ensuring that only relevant records are returned.\n\n**Field behavior**\n- Defines the criteria for searching or filtering records by specifying one or more conditions.\n- Supports logical operators such as AND, OR, and NOT to combine multiple conditions logically.\n- Allows inclusion of field names, comparison operators, and literal values to construct meaningful expressions.\n- Enables nested expressions for advanced querying capabilities.\n- Used internally to perform lookups by matching records against the defined expression.\n- Returns records that satisfy all specified conditions within the expression.\n- Evaluates expressions dynamically at runtime to reflect current data states.\n- Supports referencing related record fields to enable cross-record filtering.\n\n**Implementation guidance**\n- Ensure the expression syntax strictly conforms to NetSuite’s search expression format and supported operators.\n- Validate the expression prior to execution to catch syntax errors and prevent runtime failures.\n- Use parameterized values or safe input handling to mitigate injection risks and enhance security.\n- Support and correctly parse nested expressions to handle complex query scenarios.\n- Implement graceful handling for cases where the expression yields no matching records, avoiding errors.\n- Optimize expressions for performance by limiting complexity and avoiding overly broad criteria.\n- Provide clear error messages or feedback when expressions are invalid or unsupported.\n- Consider caching frequently used expressions to improve lookup efficiency.\n\n**Examples**\n- `\"status = 'Open' AND type = 'Sales Order'\"`\n- `\"createdDate >= '2023-01-01' AND createdDate <= '2023-12-31'\"`\n- `\"customer.internalId = '12345' OR customer.email = 'example@example.com'\"`\n- `\"NOT (status = 'Closed' OR status = 'Cancelled')\"`\n- `\"amount > 1000 AND (priority = 'High' OR priority = 'Urgent')\"`\n- `\"item.category = 'Electronics' AND quantity >= 10\"`\n- `\"lastModifiedDate >"}},"description":"internalIdLookup is an object (not a boolean) that defines how the import locates the NetSuite internal ID of the existing record that an operation should act on. Internal IDs are unique, immutable, system-generated identifiers within NetSuite, so resolving them reliably is what allows \"update\", \"addupdate\", and \"delete\" operations (and \"add\" with ignoreExisting) to target the correct record.\n\n**Field behavior**\n- Configures how the target record's internal ID is resolved before the operation runs, using the extract, searchField, and expression sub-properties.\n- extract pulls the internal ID from a field already present in the data being processed; searchField and expression search NetSuite for a matching record by another attribute.\n- If no matching record is found, behavior depends on the operation: \"addupdate\" creates a new record, while \"update\" and \"delete\" cannot act on it.\n\n**Implementation guidance**\n- Required whenever operation is \"update\", \"addupdate\", or \"delete\", and for \"add\" with ignoreExisting.\n- Prefer unique, indexed attributes (e.g., externalId, email, tranId) for searches to avoid ambiguous or multiple matches.\n- Verify that the searched field exists and is searchable on the record type being imported, and that the API user has permission to search it.\n\n**Examples** (illustrative shapes only)\n- `{ \"operation\": \"update\", \"internalIdLookup\": { \"expression\": \"externalId = 'CUST-1001'\" } }`: locate the existing record by external ID before updating it.\n- `{ \"internalIdLookup\": { \"extract\": \"internalId\" } }`: read the internal ID from a field already present in the incoming data.\n\n**Important notes**\n- This field is an object, NOT a boolean; do not set it to true or false.\n- The operation field requires internalIdLookup for \"update\", \"addupdate\", and \"delete\"; omitting it prevents existing records from being located.\n- Search criteria that match multiple records can produce ambiguous results or update the wrong record; choose attributes that uniquely identify the target."},"preferences":{"type":"object","properties":{"ignoreReadOnlyFields":{"type":"boolean","description":"ignoreReadOnlyFields specifies whether the system should bypass any attempts to modify fields designated as read-only during data processing or update operations. When set to true, the system silently ignores changes to these protected fields, preventing errors or exceptions that would otherwise occur if such modifications were attempted. This behavior facilitates smoother integration and more resilient data handling, especially in environments where partial updates are common or where read-only constraints are strictly enforced. Conversely, when set to false, the system will attempt to update all fields, including those marked as read-only, which may result in errors or exceptions if such modifications are not permitted.  \n**Field behavior:**  \n- Controls whether read-only fields are excluded from update or modification attempts.  \n- When true, modifications to read-only fields are skipped silently without raising errors or exceptions.  \n- When false, attempts to modify read-only fields may cause errors, exceptions, or transaction failures.  \n- Helps maintain system stability by preventing unauthorized or unsupported changes to protected data.  
\n- Influences how partial updates and patch operations are processed in the presence of read-only constraints.  \n- Affects error reporting and feedback mechanisms during data validation and update workflows.  \n\n**Implementation guidance:**  \n- Enable this flag to enhance robustness and fault tolerance when integrating with APIs or systems enforcing read-only field restrictions.  \n- Use in scenarios where it is acceptable or expected that read-only fields remain unchanged during update operations.  \n- Ensure downstream business logic, validation rules, and data consistency checks accommodate the possibility that some fields may not be updated.  \n- Carefully evaluate the impact on data integrity, audit requirements, and compliance before enabling this behavior.  \n- Coordinate with audit, logging, and monitoring mechanisms to accurately capture which fields were updated, skipped, or ignored.  \n- Consider the implications for user feedback and error handling in client applications consuming the API.  \n\n**Examples:**  \n- ignoreReadOnlyFields: true — Update operations will silently skip read-only fields, avoiding errors and allowing partial updates to succeed.  \n- ignoreReadOnlyFields: false — Update operations will attempt to modify all fields, potentially triggering errors or exceptions if read-only fields are included.  \n- In a bulk update scenario, setting this to true prevents the entire batch from failing due to attempts to modify read-only fields.  \n- When performing a patch request, enabling this flag allows the system to apply only permissible changes without interruption.  \n\n**Important notes:**  \n- Setting this flag to true does not grant permission to"},"warningAsError":{"type":"boolean","description":"Indicates whether warnings encountered during processing should be escalated and treated as errors, causing the operation to fail immediately upon any warning detection. This setting enforces stricter validation and operational integrity by preventing processes from completing successfully if any warnings arise, thereby ensuring higher data quality, compliance standards, and system reliability. When enabled, it transforms non-critical warnings into blocking errors, which can halt workflows, trigger rollbacks, or initiate error handling routines. 
This behavior is particularly valuable in environments where data accuracy and regulatory adherence are paramount, but it may also increase failure rates and require more robust error management and user communication strategies.\n\n**Field behavior**\n- When set to true, any warning triggers an error state, halting or failing the operation immediately.\n- When set to false, warnings are logged or reported but do not interrupt the workflow or cause failure.\n- Influences validation, data processing, integration, and transactional workflows where warnings might otherwise be non-blocking.\n- Affects error reporting mechanisms and may alter the flow of retries, rollbacks, or aborts.\n- Can change the overall system behavior by elevating the severity of warnings to errors.\n- Impacts downstream processes by potentially preventing partial or inconsistent data states.\n\n**Implementation guidance**\n- Use to enforce strict data integrity and prevent continuation of processes with potential issues.\n- Assess the impact on user experience, as enabling this may increase failure rates and necessitate enhanced error handling and user notifications.\n- Ensure client applications and integrations are designed to handle errors resulting from escalated warnings gracefully.\n- Clearly document the default behavior when this flag is unset to avoid ambiguity for users and developers.\n- Coordinate with logging, monitoring, and alerting systems to provide clear diagnostics and traceability when warnings are treated as errors.\n- Consider providing configuration options at multiple levels (user, project, global) to allow flexible enforcement.\n- Test thoroughly in staging environments to understand the operational impact before enabling in production.\n\n**Examples**\n- warningAsError: true — Data import fails immediately if any warnings are detected during validation, preventing partial or potentially corrupt data ingestion.\n- warningAsError: false — Warnings during data validation are logged but do not stop the import process, allowing for continued operation despite minor issues.\n- warningAsError: true — Integration workflows abort on any warning to maintain strict compliance with regulatory or business rules.\n- warningAsError: false — Batch processing continues despite warnings, with issues flagged for later review.\n- warningAsError: true —"},"skipCustomMetadataRequests":{"type":"boolean","description":"Indicates whether to bypass requests for custom metadata during API operations, enabling optimized performance by reducing unnecessary data retrieval and processing when custom metadata is not required. This flag allows fine-tuning of the API's behavior to specific use cases by controlling the inclusion of potentially resource-intensive custom metadata. By skipping these requests, the system can achieve faster response times, lower bandwidth usage, and reduced computational overhead, especially beneficial in high-throughput or performance-sensitive scenarios. Conversely, including custom metadata ensures comprehensive data context, which is critical for operations requiring full validation, auditing, or enrichment.  \n**Field behavior:**  \n- When set to true, the system omits fetching and processing any custom metadata associated with the request, improving efficiency and reducing payload size.  \n- When set to false or left unspecified, the system retrieves and processes all relevant custom metadata, ensuring complete data context.  
\n- Directly influences the amount of data transmitted and processed, impacting API responsiveness and resource utilization.  \n- Affects downstream workflows that depend on the presence or absence of custom metadata for decision-making or data integrity.  \n**Implementation guidance:**  \n- Employ this flag to optimize performance in scenarios where custom metadata is unnecessary, such as bulk data exports, read-only queries, or environments with stable metadata.  \n- Assess the potential impact on data completeness, validation, auditing, and enrichment to avoid compromising critical business logic.  \n- Clearly communicate the default behavior when the property is omitted to prevent misunderstandings among API consumers.  \n- Enforce boolean validation to maintain consistent and predictable API behavior.  \n- Consider how this setting interacts with caching mechanisms, data synchronization, and other metadata-related preferences to ensure coherent system behavior.  \n**Examples:**  \n- `true`: Skip all custom metadata requests to expedite processing and minimize resource consumption in performance-critical workflows.  \n- `false`: Include custom metadata requests to obtain full metadata details necessary for thorough data handling and validation.  \n**Important notes:**  \n- Skipping custom metadata may lead to incomplete data contexts if downstream processes rely on that metadata for essential functions like validation or auditing.  \n- This setting is most effective when metadata is static, infrequently changed, or irrelevant to the current operation.  \n- Altering this flag can influence caching strategies, data consistency, and synchronization across distributed systems.  \n- Ensure all relevant stakeholders understand the implications of toggling this flag to maintain data integrity and operational correctness.  \n**Dependency chain:**  \n- Relies on the existence and"}},"description":"An object encapsulating the user's customizable settings and options within the NetSuite environment, enabling a highly personalized and efficient user experience. This object encompasses a broad spectrum of configurable preferences including interface layout, language, timezone, notification methods, default currencies, date and time formats, themes, and other user-specific options that directly influence how the system behaves, appears, and interacts with the individual user. Preferences are designed to be flexible, supporting inheritance from role-based or system-wide defaults while allowing users to override settings to suit their unique workflows and requirements. 
The preferences object supports dynamic retrieval and partial updates, ensuring that changes can be made granularly without affecting unrelated settings, thereby maintaining data integrity and user experience consistency.\n\n**Field behavior**\n- Contains user-specific configuration settings that tailor the NetSuite experience to individual needs and roles.\n- Includes preferences related to UI layout, notification settings, default currencies, date/time formats, language, themes, and other personalization options.\n- Supports dynamic retrieval and partial updates, enabling users to modify individual preferences without overwriting the entire object.\n- Allows inheritance of preferences from role-based or system-wide defaults when user-specific settings are not explicitly defined.\n- Changes to preferences immediately affect the user interface, notification delivery, data presentation, and overall system behavior.\n- Preferences persist across sessions and devices, ensuring a consistent user experience.\n- Supports both simple scalar values and complex nested structures to accommodate diverse configuration needs.\n\n**Implementation guidance**\n- Structure as a nested JSON object with clearly defined and documented sub-properties grouped by categories such as notifications, display settings, localization, and system defaults.\n- Validate all input during updates to ensure data integrity, prevent invalid configurations, and maintain system stability.\n- Implement partial update mechanisms (e.g., PATCH semantics) to allow granular and efficient modifications.\n- Enforce strict access controls to ensure only authorized users can view or modify preferences.\n- Consider versioning the preferences schema to support backward compatibility and future enhancements.\n- Provide comprehensive documentation for each sub-property to facilitate correct usage, integration, and maintenance.\n- Optimize retrieval and update operations for performance, especially in environments with large user bases.\n- Ensure compatibility with role-based access controls and system-wide default settings to maintain coherent preference hierarchies.\n\n**Examples**\n- `{ \"language\": \"en-US\", \"timezone\": \"America/New_York\", \"currency\": \"USD\", \"notifications\": { \"email\": true, \"sms\": false } }`\n- `{ \"dashboardLayout\": \"compact\", \""},"file":{"type":"object","properties":{"name":{"type":"string","description":"The name of the file as it will be identified, stored, and managed within the NetSuite system. This filename serves as the primary unique identifier for the file within its designated folder or context, ensuring precise referencing, retrieval, and manipulation across various NetSuite modules, integrations, and user interfaces. It is critical that the filename is clear, descriptive, and relevant, as it is used not only for backend operations but also for display purposes in reports, dashboards, and file listings. The filename must strictly adhere to NetSuite’s naming conventions and restrictions to prevent conflicts, errors, or access issues, including limitations on character usage and length. 
Proper naming facilitates efficient file organization, version control, and seamless integration with external systems.\n\n**Field behavior**\n- Acts as the unique identifier for the file within its folder or storage context, preventing duplication.\n- Used extensively in file retrieval, linking, display, and management operations throughout NetSuite.\n- Must be unique within the folder to avoid overwriting or access conflicts.\n- Supports inclusion of standard file extensions to indicate file type and enable appropriate handling.\n- Case sensitivity may vary depending on the operating environment and integration specifics.\n- Represents only the filename without any directory or path information.\n- Changes to the filename after creation can impact existing references or integrations.\n\n**Implementation guidance**\n- Choose a concise, descriptive, and meaningful name to facilitate easy identification and management.\n- Avoid special characters, symbols, or reserved characters (e.g., \\ / : * ? \" < > |) disallowed by NetSuite’s file naming rules.\n- Ensure the filename length complies with NetSuite’s maximum character limit (typically 255 characters).\n- Always include the appropriate file extension (e.g., .pdf, .docx, .png) to clearly denote the file format.\n- Exclude any path separators, folder names, or hierarchical data—only the filename itself should be provided.\n- Adopt consistent naming conventions that support version control, categorization, or organizational standards.\n- Validate filenames programmatically where possible to enforce compliance and prevent errors.\n- Consider the impact of renaming files on existing workflows and references before making changes.\n\n**Examples**\n- \"invoice_2024_06.pdf\"\n- \"project_plan_v2.docx\"\n- \"company_logo.png\"\n- \"financial_report_Q1_2024.xlsx\"\n- \"user_manual_v1.3.pdf\"\n\n**Important notes**\n- The filename must not contain directory or path information; it strictly represents"},"fileType":{"type":"string","description":"The fileType property specifies the precise format or category of the file managed within the NetSuite file system, such as PDF, CSV, JPEG, or other supported types. This designation is essential because it directly affects how the system processes, displays, stores, and interacts with the file throughout its lifecycle. Correctly identifying the fileType ensures that files are rendered correctly in the user interface, processed through appropriate workflows, and subjected to any file-specific restrictions, permissions, or validations. It is typically required when uploading or creating new files to guarantee compatibility, prevent errors during file operations, and maintain data integrity. 
Additionally, the fileType influences system behaviors such as indexing, searchability, and integration with other NetSuite modules or external systems.\n\n**Field behavior**\n- Defines the file’s format or category, governing system handling, processing, and display.\n- Determines rendering, storage methods, and permissible manipulations within NetSuite.\n- Enables or restricts specific operations based on the file type’s inherent capabilities and limitations.\n- Usually mandatory during file creation or upload to ensure accurate system recognition and handling.\n- Influences validation checks, error handling, and workflow routing during file operations.\n- Affects integration points with other NetSuite features, such as reporting, scripting, or external API interactions.\n\n**Implementation guidance**\n- Validate the fileType against a predefined list of supported NetSuite file types to ensure compatibility and prevent errors.\n- Ensure the fileType accurately reflects the actual content format of the file to avoid processing failures or data corruption.\n- Prefer using standardized MIME types or NetSuite-specific identifiers where applicable for consistency and interoperability.\n- Implement robust error handling to gracefully manage unsupported, unrecognized, or mismatched file types.\n- Consider system versioning, configuration differences, and customizations that may affect supported file types.\n- When changing fileType post-creation, ensure appropriate reprocessing or format conversion to maintain data integrity.\n\n**Examples**\n- \"PDF\" for Portable Document Format files commonly used for documents.\n- \"CSV\" for Comma-Separated Values files frequently used for data exchange.\n- \"JPG\" or \"JPEG\" for image files in JPEG format.\n- \"PLAINTEXT\" for plain text files without formatting.\n- \"EXCEL\" for Microsoft Excel spreadsheet files.\n- \"XML\" for Extensible Markup Language files used in data interchange.\n- \"ZIP\" for compressed archive files.\n\n**Important notes**\n- Consistency between the fileType and the actual file content"},"folder":{"type":"string","description":"The identifier or path of the folder within the NetSuite file cabinet where the file is stored or intended to be stored. This property precisely defines the file's storage location, enabling organized management, categorization, and efficient retrieval within the NetSuite environment. It supports specification either as a unique numeric folder ID or as a string representing the folder path, accommodating both absolute and relative references depending on the API context. 
Proper assignment ensures files are correctly placed within the hierarchical folder structure, facilitating access control, streamlined file operations, and maintaining organizational consistency.\n\n**Field behavior**\n- Specifies the exact folder location for storing or moving the file within the NetSuite file cabinet.\n- Accepts either a numeric folder ID for unambiguous identification or a string folder path for hierarchical referencing.\n- Mandatory when uploading new files or relocating existing files to define their destination.\n- Optional during file metadata retrieval if folder context is implicit or not required.\n- Updating this property on an existing file triggers relocation to the specified folder.\n- Influences file visibility and access permissions based on folder-level security settings.\n- Supports both absolute and relative folder path formats depending on API usage context.\n\n**Implementation guidance**\n- Confirm the target folder exists and is accessible within the NetSuite file cabinet before assignment.\n- Prefer using the internal numeric folder ID to avoid ambiguity and ensure precise targeting.\n- Support both absolute and relative folder paths where the API context allows.\n- Enforce permission validation to verify that the user or integration has adequate rights to access or modify the folder.\n- Normalize folder paths to comply with NetSuite’s hierarchical structure and naming conventions.\n- Provide informative error responses if the folder is invalid, non-existent, or access is denied.\n- When moving files by updating this property, ensure that any dependent metadata or references are updated accordingly.\n\n**Examples**\n- \"123\" (numeric folder ID representing a specific folder)\n- \"/Documents/Invoices\" (string folder path indicating a nested folder structure)\n- \"456\" (numeric folder ID for a project-specific folder)\n- \"Shared/Marketing/Assets\" (relative folder path within the file cabinet hierarchy)\n\n**Important notes**\n- Folder IDs are unique within the NetSuite account and are the recommended method for precise folder referencing.\n- Folder paths must conform to NetSuite’s naming standards and reflect the correct folder hierarchy.\n- Changing the folder property on an existing file effectively moves the file within the file cabinet.\n- Adequate permissions are required to access or modify the specified folder;"},"folderInternalId":{"type":"string","description":"The internal identifier of the folder within the NetSuite file cabinet where the file is stored. This unique, system-generated integer ID precisely specifies the folder location, enabling accurate file operations such as upload, download, retrieval, and organization. 
It is essential for associating files with their respective folders and maintaining the hierarchical structure of the file cabinet, ensuring consistent file management and access control.\n\n**Field behavior**\n- Represents a unique, system-assigned internal ID for a folder in the NetSuite file cabinet.\n- Required to specify or retrieve the exact folder location during file management operations.\n- Must correspond to an existing folder within the NetSuite account to ensure valid file placement.\n- Changing this ID effectively moves the file to a different folder within the file cabinet.\n- Permissions on the folder influence access and operations on the associated files.\n- The field is mandatory when creating or updating a file to define its storage location.\n- Supports hierarchical folder structures by linking files to nested folders via their internal IDs.\n\n**Implementation guidance**\n- Validate that the folderInternalId is a valid integer and corresponds to an existing folder before use.\n- Retrieve valid folder internal IDs through NetSuite’s SuiteScript, REST API, or UI to avoid errors.\n- Implement error handling for cases where the folderInternalId is missing, invalid, or points to a restricted folder.\n- Update this ID carefully, as it changes the file’s folder location and may affect file accessibility.\n- Ensure that the user or integration has appropriate permissions to access or modify the target folder.\n- When moving files between folders, confirm that the destination folder supports the file type and intended use.\n- Cache or store folderInternalId values cautiously to prevent stale references in dynamic folder structures.\n\n**Examples**\n- 12345\n- 67890\n- 101112\n\n**Important notes**\n- This ID is distinct from the folder name; it is a system-generated unique identifier.\n- Modifying the folderInternalId moves the file to a different folder, impacting file organization.\n- Folder permissions and sharing settings can restrict or enable file operations based on this ID.\n- Accurate use of this ID is critical for maintaining the integrity of the file cabinet’s folder hierarchy.\n- The folderInternalId cannot be arbitrarily assigned; it must be obtained from NetSuite’s system.\n- Changes to folder structure or deletion of folders may invalidate existing folderInternalId references.\n\n**Dependency chain**\n- Depends on the existence and structure of folders within the NetSuite"},"internalId":{"type":"string","description":"Unique identifier assigned internally to a file within the NetSuite system, serving as the primary key for that file record. This identifier is essential for programmatic referencing and management of the file, enabling precise operations such as retrieval, update, or deletion. The internalId is immutable once assigned and guaranteed to be unique across all file records, ensuring accurate and consistent identification. It is automatically generated by NetSuite upon file creation and is consistently used across both REST and SOAP API endpoints to perform file-related actions. 
This identifier is critical for maintaining data integrity, enabling seamless integration, and supporting reliable file management workflows within the NetSuite ecosystem.\n\n**Field behavior**\n- Acts as the definitive and immutable identifier for a file within NetSuite.\n- Must be unique for each file record to prevent conflicts.\n- Required for all API operations targeting a specific file, including retrieval, updates, and deletions.\n- Cannot be manually assigned, duplicated, or reassigned to another file.\n- Remains constant throughout the lifecycle of the file record.\n\n**Implementation guidance**\n- Always retrieve the internalId from NetSuite responses after file creation or queries.\n- Use the internalId exclusively for all subsequent API calls involving the file.\n- Avoid any manual assignment or modification of the internalId to prevent inconsistencies.\n- Validate that the internalId conforms to NetSuite’s expected format before use in API calls.\n- Handle errors gracefully when an invalid or non-existent internalId is provided.\n\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"1001\"\n\n**Important notes**\n- Providing an incorrect or non-existent internalId will result in API errors or failed operations.\n- The internalId is distinct and separate from the file name, file path, or any external identifiers.\n- It plays a critical role in maintaining data integrity and ensuring accurate file referencing within NetSuite.\n- The internalId cannot be reused or recycled for different files.\n\n**Dependency chain**\n- Requires the existence of the corresponding file record within NetSuite.\n- Utilized by API methods such as get, update, and delete for file management operations.\n- May be referenced by other NetSuite entities or records that link to the file.\n- Dependent on NetSuite’s internal database and indexing mechanisms for uniqueness and retrieval.\n\n**Technical details**\n- Represented as a string or numeric value depending on the API context.\n- Automatically assigned by NetSuite during the file creation process.\n- Stored as the primary key in the Net"},"backupFolderInternalId":{"type":"string","description":"backupFolderInternalId is the unique internal identifier assigned by NetSuite to a specific folder within the file cabinet designated exclusively for storing backup files. This identifier is essential for accurately locating, managing, and organizing backup data within the system’s hierarchical file storage structure. It ensures that all backup operations—such as file creation, modification, and retrieval—are precisely targeted to the correct folder, thereby facilitating reliable data preservation, streamlined backup workflows, and efficient recovery processes.  \n**Field behavior:**  \n- Uniquely identifies the backup folder within the NetSuite file cabinet, eliminating ambiguity in folder selection.  \n- Used to specify or retrieve the exact storage location for backup files during automated or manual backup operations.  \n- Typically assigned and referenced during backup configuration, file management tasks, or programmatic interactions with the NetSuite API.  \n- Enables seamless integration with automated backup processes by directing files to the intended folder without manual intervention.  \n**Implementation guidance:**  \n- Verify that the internal ID corresponds to an existing, active, and accessible folder within the NetSuite file cabinet before use.  
\n- Ensure the designated folder has appropriate permissions set to allow creation, modification, and retrieval of backup files.  \n- Avoid using internal IDs of folders that are restricted, archived, deleted, or otherwise unsuitable for active backup storage to prevent errors.  \n- Maintain consistent use of this ID across all API calls, scripts, and backup configurations to uphold data integrity and organizational consistency.  \n**Examples:**  \n- \"12345\" (example of a valid internal folder ID)  \n- \"67890\"  \n- \"34567\"  \n**Important notes:**  \n- This identifier is internal to NetSuite and does not correspond to external folder names, display names, or file system paths.  \n- Changing this ID will redirect backup files to a different folder, which may disrupt existing backup workflows or data organization.  \n- Proper folder permissions are essential to avoid backup failures, access denials, or data loss.  \n- Once set for a given backup configuration, this ID should be treated as immutable unless a deliberate and well-documented change is required.  \n**Dependency chain:**  \n- Requires the target folder to exist within the NetSuite file cabinet and be properly configured for backup storage.  \n- Interacts closely with backup configuration settings, file management APIs, and NetSuite’s permission and security models.  \n- Dependent on NetSuite’s internal folder management system to maintain uniqueness and integrity of folder"}},"description":"Represents a comprehensive file object within the NetSuite system, serving as a fundamental entity for uploading, storing, managing, and referencing a wide variety of file types including documents, images, spreadsheets, audio, video, and other media formats. This object facilitates seamless integration and association of file data with NetSuite records, transactions, custom entities, and workflows, enabling efficient document management, retrieval, and automation across the platform. It supports detailed metadata management such as file name, type, size, folder location, internal identifiers, encoding formats, and timestamps, ensuring precise control, traceability, and auditability of files. The file object can represent both files stored internally in the NetSuite file Cabinet and externally referenced files via URLs or other means, providing flexibility in file handling and access. It enforces platform-specific constraints on file size, type, and permissions to maintain system integrity, security, and compliance with organizational policies. 
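As a purely illustrative sketch, a minimal file definition might combine the sub-properties documented below, for example {\"name\": \"invoice_2024_06.pdf\", \"fileType\": \"PDF\", \"folderInternalId\": \"12345\"}, where the file name, type, and folder ID are hypothetical placeholder values.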
Additionally, it supports versioning and updates to existing files, allowing for effective file lifecycle management within NetSuite.\n\n**Field behavior**\n- Used to specify, upload, retrieve, update, or link files associated with NetSuite records, transactions, custom entities, or workflows.\n- Supports a broad spectrum of file types including PDFs, images (JPEG, PNG, GIF), spreadsheets (Excel, CSV), text documents, audio, video, and other media formats.\n- Can represent files stored internally within the NetSuite file Cabinet or externally referenced files via URLs or other means.\n- Enables attachment of files to records to provide enhanced context, documentation, and audit trails.\n- Manages comprehensive file metadata including internalId, name, fileType, size, folder location, encoding, and timestamps.\n- Supports versioning and updates to existing files where applicable.\n- Enforces platform-specific restrictions on file size, type, and access permissions.\n- Facilitates secure file access and sharing based on user roles and permissions.\n\n**Implementation guidance**\n- Include complete and accurate metadata such as internalId, name, fileType, size, folder, encoding, and timestamps to ensure reliable file referencing and management.\n- Validate file types and sizes against NetSuite’s platform restrictions prior to upload to prevent errors and ensure compliance.\n- Use appropriate encoding methods (e.g., base64) for file content during API transmission to maintain data integrity.\n- Employ multipart/form-data encoding for file uploads when required by the API specifications.\n- Ensure users have the necessary roles and permissions to upload, retrieve, or modify files"}}},"NetsuiteDistributed":{"type":"object","description":"Configuration for NetSuite Distributed (SuiteApp 2.0) import operations.\nThis is the primary sub-schema for NetSuiteDistributedImport adaptorType.\nThe API field name is \"netsuite_da\".\n\n**Key fields**\n- operation: REQUIRED — the NetSuite operation (add, update, addupdate, etc.)\n- recordType: REQUIRED — the NetSuite record type (e.g., \"customer\", \"salesOrder\")\n- mapping: field mappings from source data to NetSuite fields\n- internalIdLookup: how to find existing records for update/addupdate/delete\n- restletVersion: defaults to \"suiteapp2.0\", rarely needs to be changed\n","properties":{"operation":{"type":"string","enum":["add","update","addupdate","attach","detach","delete"],"description":"The NetSuite operation to perform. REQUIRED. This is a plain string value, NOT an object.\n\n**Valid values**\n- \"add\" — Create new records only.\n- \"update\" — Update existing records. Requires internalIdLookup.\n- \"addupdate\" — Upsert: update if found, create if not. Requires internalIdLookup. Most common.\n- \"attach\" — Attach a record to another record.\n- \"detach\" — Detach a record from another record.\n- \"delete\" — Delete records. Requires internalIdLookup.\n\n**Guidance**\n- Default to \"addupdate\" when user says \"sync\", \"upsert\", or \"create or update\".\n- Use \"add\" when user says \"create\" or \"insert\" without mentioning updates.\n- For \"update\", \"addupdate\", and \"delete\", you MUST also set internalIdLookup.\n\n**Important**\nThis field is a string, NOT an object. Set it directly: \"operation\": \"addupdate\"\nDo NOT send {\"type\": \"addupdate\"} — that causes a Cast error.\n"},"recordType":{"type":"string","description":"The NetSuite record type to import into. 
REQUIRED.\n\nMust match a valid NetSuite record type identifier.\n\n**Examples**\n- \"customer\"\n- \"salesOrder\"\n- \"invoice\"\n- \"purchaseOrder\"\n- \"vendorBill\"\n- \"itemFulfillment\"\n- \"employee\"\n- \"inventoryItem\"\n- \"customrecord_myrecord\" (custom record types)\n"},"recordIdentifier":{"type":"string","description":"Custom record identifier, used to identify the specific record type when\nimporting into custom record types.\n"},"recordTypeId":{"type":"string","description":"The internal record type ID. Used for custom record types in NetSuite where\nthe numeric ID is needed in addition to the recordType string.\n"},"restletVersion":{"type":"string","enum":["suitebundle","suiteapp1.0","suiteapp2.0"],"description":"The version of the NetSuite RESTlet to use. This is a plain STRING, NOT an object.\n\nDefaults to \"suiteapp2.0\" when useSS2Restlets is true, \"suitebundle\" otherwise.\nAlmost always \"suiteapp2.0\" for modern integrations — rarely needs to be explicitly set.\n\nIMPORTANT: Set as a plain string value, e.g. \"suiteapp2.0\"\nDo NOT send {\"type\": \"suiteapp2.0\"} — that causes a Cast error.\n"},"useSS2Restlets":{"type":"boolean","description":"Whether to use SuiteScript 2.0 RESTlets. Defaults to true for modern integrations.\nWhen true, restletVersion defaults to \"suiteapp2.0\".\n"},"missingOrCorruptedDAConfig":{"type":"boolean","description":"Flag indicating whether the Distributed Adaptor configuration is missing or corrupted.\nSet by the system — do not set manually.\n"},"batchSize":{"type":"number","description":"Number of records to process per batch. Controls how many records are sent\nto NetSuite in a single API call. Typical values: 50-200.\n"},"internalIdLookup":{"type":"object","description":"Configuration for looking up existing NetSuite records by internal ID.\nREQUIRED when operation is \"update\", \"addupdate\", or \"delete\".\n\nDefines how to find existing records in NetSuite to match against incoming data.\n","properties":{"extract":{"type":"string","description":"The path in the source record to extract the lookup value from.\nExample: \"internalId\" or \"externalId\"\n"},"searchField":{"type":"string","description":"The NetSuite field to search against.\nExample: \"externalId\", \"email\", \"name\", \"tranId\"\n"},"operator":{"type":"string","description":"The comparison operator for the lookup.\nExample: \"is\", \"contains\", \"startswith\"\n"},"expression":{"type":"string","description":"A NetSuite search expression for complex lookup conditions.\nUsed for multi-field or conditional lookups.\n"}}},"hooks":{"type":"object","description":"Script hooks for custom processing at different stages of the import.\nEach hook references a SuiteScript file and function.\n","properties":{"preMap":{"type":"object","description":"Runs before field mapping is applied.","properties":{"fileInternalId":{"type":"string","description":"NetSuite internal ID of the SuiteScript file."},"function":{"type":"string","description":"Name of the function to execute."},"configuration":{"type":"object","description":"Configuration object passed to the hook function."}}},"postMap":{"type":"object","description":"Runs after field mapping, before submission to NetSuite.","properties":{"fileInternalId":{"type":"string","description":"NetSuite internal ID of the SuiteScript file."},"function":{"type":"string","description":"Name of the function to execute."},"configuration":{"type":"object","description":"Configuration object passed to the hook 
function."}}},"postSubmit":{"type":"object","description":"Runs after the record is submitted to NetSuite.","properties":{"fileInternalId":{"type":"string","description":"NetSuite internal ID of the SuiteScript file."},"function":{"type":"string","description":"Name of the function to execute."},"configuration":{"type":"object","description":"Configuration object passed to the hook function."}}}}},"mapping":{"type":"object","description":"Field mappings that define how source data fields map to NetSuite record fields.\nContains both body-level fields and sublist (line-item) fields.\n","properties":{"fields":{"type":"array","items":{"type":"object","properties":{"extract":{"type":"string","description":"Path in the source record to extract the value from.\nUse dot notation for nested fields (e.g., \"address.city\").\n"},"generate":{"type":"string","description":"The NetSuite field ID to write the value to (e.g., \"companyname\", \"email\", \"subsidiary\").\n"},"hardCodedValue":{"type":"string","description":"A static value to always use instead of extracting from source data.\nMutually exclusive with extract.\n"},"lookupName":{"type":"string","description":"Reference to a lookup defined in netsuite_da.lookups by name."},"dataType":{"type":"string","description":"Data type hint for the field value (e.g., \"string\", \"number\", \"date\", \"boolean\").\n"},"internalId":{"type":"boolean","description":"Whether the value is a NetSuite internal ID reference."},"immutable":{"type":"boolean","description":"When true, this field is only set on record creation, not on updates."},"discardIfEmpty":{"type":"boolean","description":"When true, skip this field mapping if the extracted value is empty."},"extractDateFormat":{"type":"string","description":"Date format of the extracted value (e.g., \"MM/DD/YYYY\", \"ISO8601\").\nUsed to parse date strings from source data.\n"},"extractDateTimezone":{"type":"string","description":"Timezone of the extracted date value (e.g., \"America/New_York\")."},"subRecordMapping":{"type":"object","description":"Nested mapping for NetSuite subrecord fields (e.g., address, inventory detail)."},"conditional":{"type":"object","description":"Conditional logic for when to apply this field mapping.\n","properties":{"lookupName":{"type":"string","description":"Lookup to evaluate for the condition."},"when":{"type":"string","enum":["record_created","record_updated","extract_not_empty","lookup_not_empty","lookup_empty","ignore_if_set"],"description":"When to apply this mapping:\n- record_created: only on new records\n- record_updated: only on existing records\n- extract_not_empty: only when extracted value is non-empty\n- lookup_not_empty: only when lookup returns a value\n- lookup_empty: only when lookup returns no value\n- ignore_if_set: skip if field already has a value\n"}}}}},"description":"Body-level field mappings. 
Each entry maps a source field to a NetSuite body field.\n"},"lists":{"type":"array","items":{"type":"object","properties":{"generate":{"type":"string","description":"The NetSuite sublist ID (e.g., \"item\" for sales order line items,\n\"addressbook\" for address sublists).\n"},"jsonPath":{"type":"string","description":"JSON path in the source data that contains the array of sublist records.\n"},"fields":{"type":"array","items":{"type":"object","properties":{"extract":{"type":"string","description":"Path in the sublist record to extract the value from."},"generate":{"type":"string","description":"NetSuite sublist field ID to write to."},"hardCodedValue":{"type":"string","description":"Static value for this sublist field."},"lookupName":{"type":"string","description":"Reference to a lookup by name."},"dataType":{"type":"string","description":"Data type hint for the field value."},"internalId":{"type":"boolean","description":"Whether the value is a NetSuite internal ID reference."},"isKey":{"type":"boolean","description":"Whether this field is a key field for matching existing sublist lines."},"immutable":{"type":"boolean","description":"Only set on record creation, not updates."},"discardIfEmpty":{"type":"boolean","description":"Skip if the extracted value is empty."},"extractDateFormat":{"type":"string","description":"Date format of the extracted value."},"extractDateTimezone":{"type":"string","description":"Timezone of the extracted date value."},"subRecordMapping":{"type":"object","description":"Nested mapping for subrecord fields within the sublist."},"conditional":{"type":"object","properties":{"lookupName":{"type":"string"},"when":{"type":"string","enum":["record_created","record_updated","extract_not_empty","lookup_not_empty","lookup_empty","ignore_if_set"]}}}}},"description":"Field mappings for each column in the sublist."}}},"description":"Sublist (line-item) mappings. Each entry maps source data to a NetSuite sublist.\n"}}},"lookups":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique name for this lookup, referenced by lookupName in field mappings.\n"},"recordType":{"type":"string","description":"NetSuite record type to search (e.g., \"customer\", \"item\").\n"},"searchField":{"type":"string","description":"NetSuite field to search against (e.g., \"email\", \"externalId\", \"name\").\n"},"resultField":{"type":"string","description":"Field from the lookup result to return (e.g., \"internalId\").\n"},"expression":{"type":"string","description":"NetSuite search expression for complex lookup conditions.\n"},"operator":{"type":"string","description":"Comparison operator (e.g., \"is\", \"contains\", \"startswith\").\n"},"includeInactive":{"type":"boolean","description":"Whether to include inactive records in lookup results."},"useDefaultOnMultipleMatches":{"type":"boolean","description":"When true, use the default value if multiple records match the lookup."},"allowFailures":{"type":"boolean","description":"When true, lookup failures do not stop the import."},"map":{"type":"object","description":"Static value mapping for lookup results (key-value pairs)."},"default":{"type":"string","description":"Default value when lookup returns no results."}}},"description":"Lookup definitions used to resolve reference values from NetSuite.\nReferenced by name from field mappings via the lookupName property.\n"},"rawOverride":{"type":"object","description":"Raw override object for advanced use cases. 
When useRawOverride is true,\nthis object is sent directly to the NetSuite API, bypassing normal mapping.\n"},"useRawOverride":{"type":"boolean","description":"When true, uses rawOverride instead of the normal mapping configuration.\n"},"isMigrated":{"type":"boolean","description":"System flag indicating whether this import was migrated from a legacy format.\n"},"preferences":{"type":"object","properties":{"ignoreReadOnlyFields":{"type":"boolean","description":"When true, silently skip read-only fields instead of raising errors."},"warningAsError":{"type":"boolean","description":"When true, treat NetSuite warnings as errors that stop the import."},"skipCustomMetadataRequests":{"type":"boolean","description":"When true, skip fetching custom field metadata to improve performance."}},"description":"Import behavior preferences that control how NetSuite handles the import operation.\n"},"retryUpdateAsAdd":{"type":"boolean","description":"When true, if an update fails because the record doesn't exist, automatically\nretry as an add operation. Useful for initial syncs where records may not exist yet.\n"},"isFileProvider":{"type":"boolean","description":"Whether this import handles files in the NetSuite File Cabinet.\n"},"file":{"type":"object","description":"File cabinet configuration for file-based imports into NetSuite.\n","properties":{"name":{"type":"string","description":"Filename for the file in NetSuite File Cabinet."},"fileType":{"type":"string","description":"NetSuite file type (e.g., \"PDF\", \"CSV\", \"PLAINTEXT\", \"EXCEL\", \"XML\").\n"},"folder":{"type":"string","description":"Folder path or name in the NetSuite File Cabinet."},"folderInternalId":{"type":"string","description":"Internal ID of the target folder in NetSuite File Cabinet."},"internalId":{"type":"string","description":"Internal ID of an existing file to update."},"backupFolderInternalId":{"type":"string","description":"Internal ID of a backup folder for file versioning."}}},"customFieldMetadata":{"type":"object","description":"Metadata about custom fields on the target record type.\nPopulated by the system from NetSuite metadata — do not set manually.\n"}}},"Rdbms":{"type":"object","description":"Configuration for RDBMS import operations. Used for database imports into SQL Server, MySQL, PostgreSQL, Snowflake, Oracle, MariaDB, and other relational databases.\n\n**Query type determines which fields are required**\n\n| queryType          | Required fields                | Do NOT set        |\n|--------------------|--------------------------------|-------------------|\n| [\"per_record\"]     | query (array of SQL strings)   | bulkInsert        |\n| [\"per_page\"]       | query (array of SQL strings)   | bulkInsert        |\n| [\"bulk_insert\"]    | bulkInsert object              | query             |\n| [\"bulk_load\"]      | bulkLoad object                | query             |\n\n**Critical:** query IS AN ARRAY OF STRINGS\nThe query field must be an array of plain strings, NOT a single string and NOT an array of objects.\nCorrect: [\"MERGE INTO target USING ...\"]\nWrong: \"MERGE INTO target ...\"\nWrong: [{\"query\": \"MERGE INTO target ...\"}]\n","properties":{"lookups":{"type":["string","null"],"description":"A collection of predefined reference data sets or key-value pairs designed to standardize, validate, and streamline input values across the application. 
These lookup entries serve as a centralized repository of commonly used values, enabling consistent data entry, minimizing errors, and facilitating efficient data retrieval and processing. They are essential for populating user interface elements such as dropdown menus, autocomplete fields, and for enforcing validation rules within business logic. Lookup data can be static or dynamically updated, and may support localization to accommodate diverse user bases. Additionally, lookups often include metadata such as descriptions, effective dates, and status indicators to provide context and support lifecycle management. They play a critical role in maintaining data integrity, enhancing user experience, and ensuring interoperability across different system modules and external integrations.\n\n**Field behavior**\n- Contains structured sets of reference data including codes, labels, enumerations, or mappings.\n- Drives UI components by providing selectable options and autocomplete suggestions.\n- Ensures data consistency by standardizing input values across different modules.\n- Supports both static and dynamic updates to reflect changes in business requirements.\n- May include metadata like descriptions, effective dates, or status indicators for enhanced context.\n- Facilitates localization and internationalization to support multiple languages and regions.\n- Enables validation logic by restricting inputs to predefined acceptable values.\n- Supports versioning to track changes and maintain historical data integrity.\n\n**Implementation guidance**\n- Organize lookup data as dictionaries, arrays, or database tables for efficient access and management.\n- Implement caching strategies to optimize performance and reduce redundant data retrieval.\n- Enforce uniqueness and relevance of lookup entries within their specific contexts.\n- Provide robust mechanisms for updating, versioning, and extending lookup data without disrupting system stability.\n- Incorporate localization and internationalization support for user-facing lookup values.\n- Secure sensitive lookup data with appropriate access controls and auditing.\n- Design APIs or interfaces for easy retrieval and management of lookup data.\n- Ensure synchronization of lookup data across distributed systems or microservices.\n\n**Examples**\n- Country codes and names: {\"US\": \"United States\", \"CA\": \"Canada\", \"MX\": \"Mexico\"}\n- Status codes representing entity states: {\"1\": \"Active\", \"2\": \"Inactive\", \"3\": \"Pending\"}\n- Product categories for e-commerce platforms: {\"ELEC\": \"Electronics\", \"FASH\": \"Fashion\", \"HOME\": \"Home & Garden\"}\n- Payment methods: {\"CC\": \"Credit Card\", \"PP\": \"PayPal\", \"BT\":"},"query":{"type":"array","items":{"type":"string"},"description":"Array of SQL query strings to execute. REQUIRED when queryType is [\"per_record\"], [\"per_page\"], or [\"first_page\"].\n\nEach element is a complete SQL statement as a plain string. Typically contains a single query.\n\nUse Handlebars to inject values from incoming records. 
RDBMS queries REQUIRE the `record.` prefix and triple braces `{{{ }}}`.\n\n**Format — array of strings**\nThis field is an ARRAY OF STRINGS, not a single string and not an array of objects.\n- CORRECT: [\"INSERT INTO users (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n- WRONG: \"INSERT INTO users ...\"  (not an array)\n- WRONG: [{\"query\": \"INSERT INTO users ...\"}]  (objects are invalid — causes Cast error)\n- WRONG: [\"INSERT INTO users (name) VALUES ('{{name}}')\"]  (missing record. prefix — will produce empty values)\n\n**Critical:** RDBMS HANDLEBARS SYNTAX\n- MUST use `record.` prefix: `{{{record.fieldName}}}`, NOT `{{{fieldName}}}`\n- MUST use triple braces `{{{ }}}` for value references — in RDBMS context, `{{record.field}}` outputs `'value'` (wrapped in single quotes), while `{{{record.field}}}` outputs `value` (raw). Triple braces give you control over quoting in your SQL.\n- Block helpers (`{{#each}}`, `{{#if}}`, `{{/each}}`) use double braces as normal.\n- Nested fields: `{{{record.properties.email}}}`\n\n**Examples**\n- INSERT: [\"INSERT INTO customers (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n- UPDATE: [\"UPDATE inventory SET qty = {{{record.quantity}}} WHERE sku = '{{{record.sku}}}'\"]\n- UPSERT (MySQL): [\"INSERT INTO users (id, name) VALUES ({{{record.id}}}, '{{{record.name}}}') ON DUPLICATE KEY UPDATE name = '{{{record.name}}}'\"]\n- MERGE (Snowflake): [\"MERGE INTO target USING (SELECT '{{{record.email}}}' AS email) AS src ON target.email = src.email WHEN MATCHED THEN UPDATE SET name = '{{{record.name}}}' WHEN NOT MATCHED THEN INSERT (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n\n**String vs numeric values in handlebars**\n- Strings: wrap in single quotes — '{{{record.name}}}'\n- Numbers: no quotes — {{{record.quantity}}}\n"},"queryType":{"type":["array","null"],"items":{"type":"string","enum":["INSERT","UPDATE","bulk_insert","per_record","per_page","first_page","bulk_load"]},"description":"Specifies the execution strategy for the SQL operation.\n\n**CRITICAL: This field DETERMINES whether `query` or `bulkInsert` is required.**\n\n**Preference order (use the highest-performance option the operation supports)**\n\n1. **`[\"bulk_load\"]`** — fastest. Stages data as a file then loads via COPY/bulk mechanism. Supports upsert via `primaryKeys`, and custom merge/ignore logic via `overrideMergeQuery`. **Currently Snowflake and NSAW.**\n2. **`[\"bulk_insert\"]`** — batch INSERT via multi-row VALUES. No upsert/ignore logic. All RDBMS types.\n3. **`[\"per_page\"]`** — one SQL statement per page of records. All RDBMS types.\n4. **`[\"per_record\"]`** — one SQL statement per record. Full control over individual record SQL. All RDBMS types.\n\nAlways prefer the highest option that satisfies the operation. `bulk_load` handles INSERT, UPSERT, and custom merge logic. `bulk_insert` is best for pure INSERTs when `bulk_load` isn't available. `per_page` handles custom SQL at batch level. `per_record` gives full control over individual record SQL.\n\n**Decision tree (Follow in order)**\n\n**STEP 1: Does the database support bulk_load?**\n- If Snowflake, NSAW, or another database with bulk_load support:\n  → **USE `[\"bulk_load\"]`** with:\n    - `bulkLoad.tableName` (required)\n    - `bulkLoad.primaryKeys` for upsert/merge (auto-generates MERGE)\n    - `bulkLoad.overrideMergeQuery: true` for custom SQL (ignore-existing, conditional updates, multi-table ops). 
Override SQL references `{{import.rdbms.bulkLoad.preMergeTemporaryTable}}` for the staging table.\n    - No `primaryKeys` for pure INSERT (auto-generates INSERT)\n\n**STEP 2: Database does NOT support bulk_load**\n- **Pure INSERT**: → **USE `[\"bulk_insert\"]`** (requires `bulkInsert` object)\n- **UPDATE / UPSERT / custom matching logic**: → **USE `[\"per_page\"]`** (requires `query` field). Only use `[\"per_record\"]` if the logic truly cannot be expressed as a batch operation.\n\n**Critical relationship to other fields**\n\n| queryType        | REQUIRES         | DO NOT SET       |\n|------------------|------------------|------------------|\n| `[\"per_record\"]` | `query` field    | `bulkInsert`     |\n| `[\"bulk_insert\"]`| `bulkInsert` obj | `query`          |\n\n**Do not use**\n- `[\"INSERT\"]` or `[\"UPDATE\"]` as standalone values\n- Multiple types combined\n\n**Examples**\n\n**Insert with ignoreExisting (MUST use per_record)**\nPrompt: \"Import customers, ignore existing based on email\"\n```json\n\"queryType\": [\"per_record\"],\n\"query\": [\"INSERT INTO customers (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n```\n\n**Pure insert (no duplicate checking)**\nPrompt: \"Insert all records into users table\"\n```json\n\"queryType\": [\"bulk_insert\"],\n\"bulkInsert\": { ... }\n```\n\n**Update Operation**\nPrompt: \"Update inventory counts\"\n```json\n\"queryType\": [\"per_record\"],\n\"query\": [\"UPDATE inventory SET qty = {{{record.qty}}} WHERE sku = '{{{record.sku}}}'\"]\n```\n\n**Run Once Per Page**\nPrompt: \"Execute this stored procedure for each page of results\"\n```json\n\"queryType\": [\"per_page\"]\n```\n**per_page uses a different Handlebars context than per_record.** The context is the entire batch (`batch_of_records`), not a single `record.` object. Loop through with `{{#each batch_of_records}}...{{{record.fieldName}}}...{{/each}}`.\n\n**Run Once at Start**\nPrompt: \"Run this cleanup script before processing\"\n```json\n\"queryType\": [\"first_page\"]\n```"},"bulkInsert":{"type":"object","description":"**CRITICAL: This object is ONLY used when `queryType` is `[\"bulk_insert\"]`.**\n\n✅ **SET THIS** when:\n- `queryType` is `[\"bulk_insert\"]`\n- Pure INSERT operation with NO ignoreExisting/duplicate checking\n\n❌ **DO NOT SET THIS** when:\n- `queryType` is `[\"per_record\"]` (use `query` field instead)\n- Operation involves UPDATE, UPSERT, or ignoreExisting logic\n\nIf the prompt mentions \"ignore existing\", \"skip duplicates\", or \"match on field\",\nuse `queryType: [\"per_record\"]` with the `query` field instead of this object.","properties":{"tableName":{"type":"string","description":"The name of the database table into which the bulk insert operation will be executed. This value must correspond to a valid, existing table within the target relational database management system (RDBMS). It serves as the primary destination for inserting multiple rows of data efficiently in a single operation. The table name can include schema or namespace qualifiers if supported by the database (e.g., \"schemaName.tableName\"), allowing precise targeting within complex database structures. 
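For orientation only, a minimal pure-insert configuration might pair \"queryType\": [\"bulk_insert\"] with \"bulkInsert\": {\"tableName\": \"public.orders\", \"batchSize\": \"1000\"}; the schema, table, and batch size shown are hypothetical placeholders.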
Proper validation and sanitization of this value are essential to ensure the operation's success and to prevent SQL injection or other security vulnerabilities.\n\n**Field behavior**\n- Specifies the exact destination table for the bulk insert operation.\n- Must correspond to an existing table in the database schema.\n- Case sensitivity and naming conventions depend on the underlying RDBMS.\n- Supports schema-qualified names where applicable.\n- Used directly in the SQL `INSERT INTO` statement.\n- Influences how data is mapped and inserted during the bulk operation.\n\n**Implementation guidance**\n- Verify the existence and accessibility of the table in the target database before execution.\n- Ensure compliance with the RDBMS naming rules, including reserved keywords, allowed characters, and maximum length.\n- Support and correctly handle schema-qualified table names, respecting database-specific syntax.\n- Sanitize and validate input rigorously to prevent SQL injection and other security risks.\n- Apply appropriate quoting or escaping mechanisms based on the RDBMS (e.g., backticks for MySQL, double quotes for PostgreSQL).\n- Consider the impact of case sensitivity, especially when dealing with quoted identifiers.\n\n**Examples**\n- \"users\"\n- \"public.orders\"\n- \"inventory.products\"\n- \"sales_data_2024\"\n- \"dbo.CustomerRecords\"\n\n**Important notes**\n- The target table must exist and have the appropriate schema and permissions to accept bulk inserts.\n- Incorrect, misspelled, or non-existent table names will cause the operation to fail.\n- Case sensitivity varies by database; for example, PostgreSQL treats quoted identifiers as case-sensitive.\n- Avoid using unvalidated dynamic input to mitigate security vulnerabilities.\n- Schema qualifiers should be used consistently to avoid ambiguity in multi-schema environments.\n\n**Dependency chain**\n- Relies on the database connection and authentication configuration.\n- Interacts with other bulkInsert parameters such as column mappings and data payload.\n- May be influenced by transaction management, locking mechanisms, and database constraints during the insert.\n- Dependent on the database's metadata for validation and existence checks.\n\n**Techn**"},"batchSize":{"type":"string","description":"The number of records to be inserted into the database in a single batch during a bulk insert operation. This parameter is crucial for optimizing the performance and efficiency of bulk data loading by controlling how many records are grouped together before being sent to the database. Proper tuning of batchSize balances memory consumption, transaction overhead, and throughput, enabling the system to handle large volumes of data efficiently without overwhelming resources or causing timeouts. 
Adjusting batchSize directly impacts transaction size, network utilization, error handling granularity, and recovery strategies, making it essential to tailor this value based on the specific database capabilities, system resources, and workload characteristics.\n\n**Field behavior**\n- Specifies the exact count of records processed and committed in a single batch during bulk insert operations.\n- Determines the frequency and size of database transactions, influencing overall throughput, latency, and system responsiveness.\n- Larger batch sizes can improve throughput but may increase memory usage, transaction duration, and risk of timeouts or locks.\n- Smaller batch sizes reduce memory footprint and transaction time but may increase the total number of transactions and associated overhead.\n- Defines the scope of error handling, as failures typically affect only the current batch, allowing for partial retries or rollbacks.\n- Controls the granularity of commit points, impacting rollback, recovery, and consistency strategies in case of failures.\n\n**Implementation guidance**\n- Choose a batch size that respects the database’s transaction limits, available system memory, and network conditions.\n- Conduct benchmarking and load testing with different batch sizes to identify the optimal balance for your environment.\n- Monitor system performance continuously and adjust batchSize dynamically if supported, to adapt to varying workloads.\n- Implement comprehensive error handling to manage partial batch failures, including retry logic or compensating transactions.\n- Verify compatibility with database drivers, ORM frameworks, or middleware, which may impose constraints or optimizations on batch sizes.\n- Consider the size, complexity, and serialization overhead of individual records, as larger or more complex records may necessitate smaller batches.\n- Factor in network latency and bandwidth to optimize data transfer efficiency and reduce potential bottlenecks.\n\n**Examples**\n- 1000: Suitable for moderate bulk insert operations, balancing speed and resource consumption effectively.\n- 50000: Ideal for high-throughput environments with ample memory and finely tuned database configurations.\n- 100: Appropriate for systems with limited memory or where minimizing transaction size and duration is critical.\n- 5000: A common default batch size providing a good compromise between performance and resource usage.\n\n**Important**"}}},"bulkLoad":{"type":"object","properties":{"tableName":{"type":"string","description":"The name of the target table within the relational database management system (RDBMS) where the bulk load operation will insert, update, or merge data. This property specifies the exact destination table for the bulk data operation and must correspond to a valid, existing table in the database schema. It can include schema qualifiers if supported by the database (e.g., schema.tableName), and must adhere to the naming conventions and case sensitivity rules of the target RDBMS. Proper specification of this property is critical to ensure data is loaded into the correct location without errors or unintended data modification. 
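As an illustrative sketch only (table name assumed), a pure-insert bulk load might pair \"queryType\": [\"bulk_load\"] with \"bulkLoad\": {\"tableName\": \"analytics.customers\"}, adding primaryKeys when merge or upsert behavior is required.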
Accurate definition of the table name helps maintain data integrity and prevents operational failures during bulk load processes.\n\n**Field behavior**\n- Identifies the specific table in the database to receive the bulk-loaded data.\n- Must reference an existing table accessible with the current database credentials.\n- Supports schema-qualified names where applicable (e.g., schema.tableName).\n- Used as the target for insert, update, or merge operations during bulk load.\n- Case sensitivity and naming conventions depend on the underlying database system.\n- Determines the scope and context of the bulk load operation within the database.\n- Influences transactional behavior, locking, and concurrency controls during the operation.\n\n**Implementation guidance**\n- Verify the target table exists and the executing user has sufficient permissions for data modification.\n- Ensure the table name respects the case sensitivity and naming rules of the RDBMS.\n- Avoid reserved keywords or special characters unless properly escaped or quoted according to database requirements.\n- Support fully qualified names including schema or database prefixes if required to avoid ambiguity.\n- Validate compatibility between the table schema and the incoming data to prevent type mismatches or constraint violations.\n- Consider transactional behavior, locking, and concurrency implications on the target table during bulk load.\n- Test the bulk load operation in a development or staging environment to confirm correct table targeting.\n- Implement error handling to catch and report issues related to invalid or inaccessible table names.\n\n**Examples**\n- \"customers\"\n- \"sales_data_2024\"\n- \"public.orders\"\n- \"inventory.products\"\n- \"dbo.EmployeeRecords\"\n\n**Important notes**\n- Incorrect or misspelled table names will cause the bulk load operation to fail.\n- The target table schema must be compatible with the incoming data structure to avoid errors.\n- Bulk loading may overwrite, append, or merge data depending on the load mode; ensure this aligns with business requirements.\n- Some databases require quoting or escaping of table names, especially if they"},"primaryKeys":{"type":["string","null"],"description":"primaryKeys specifies the list of column names that uniquely identify each record in the target relational database table during a bulk load operation. These keys are essential for maintaining data integrity by enforcing uniqueness constraints and enabling precise identification of rows for operations such as inserts, updates, upserts, or conflict resolution. Properly defining primaryKeys ensures that duplicate records are detected and handled appropriately, preventing data inconsistencies and supporting efficient data merging processes. This property supports both single-column and composite primary keys, requiring the exact column names as defined in the target schema, and plays a critical role in guiding the bulk load mechanism to correctly match and manipulate records based on their unique identifiers.  \n**Field behavior:**  \n- Defines one or more columns that collectively serve as the unique identifier for each row in the table.  \n- Enforces uniqueness constraints during bulk load operations to prevent duplicate entries.  \n- Facilitates detection and resolution of conflicts or duplicates during data insertion or updating.  \n- Influences the behavior of upsert, merge, or conflict resolution mechanisms in the bulk load process.  
\n- Ensures that each record can be reliably matched and updated based on the specified keys.  \n- Supports both single-column and composite keys, maintaining the order of columns as per the database schema.  \n**Implementation guidance:**  \n- Include all columns that form the complete primary key, especially for composite keys, maintaining the correct order as defined in the database schema.  \n- Ensure column names exactly match those in the target database, respecting case sensitivity where applicable.  \n- Validate that all specified primary key columns exist in both the source data and the target table schema before initiating the bulk load.  \n- Avoid leaving the primaryKeys list empty when uniqueness enforcement or conflict resolution is required.  \n- Consider the immutability and stability of primary key columns to prevent inconsistencies during repeated load operations.  \n- Confirm that the primary key columns are indexed or constrained appropriately in the target database to optimize performance.  \n**Examples:**  \n- [\"id\"] — single-column primary key.  \n- [\"order_id\", \"product_id\"] — composite primary key consisting of two columns.  \n- [\"user_id\", \"timestamp\"] — composite key combining user identifier and timestamp for uniqueness.  \n- [\"customer_id\", \"account_number\", \"region\"] — multi-column composite primary key.  \n**Important notes:**  \n- Omitting primaryKeys when required can result in data duplication, failed loads, or incorrect conflict handling.  \n- The order of"},"overrideMergeQuery":{"type":"boolean","description":"Boolean flag that, when true, causes a custom SQL query to fully override the default merge operation executed during bulk load processes in a relational database management system (RDBMS). Enabling this property lets users define precise, tailored merge logic that governs how records are inserted, updated, or deleted in the target database table when handling large-scale data operations.
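For illustration, an override merge against a hypothetical target table might read \"MERGE INTO target USING {{import.rdbms.bulkLoad.preMergeTemporaryTable}} src ON target.email = src.email WHEN NOT MATCHED THEN INSERT (name, email) VALUES (src.name, src.email)\", an ignore-existing pattern that the auto-generated merge statement does not produce.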
By specifying this query, users can implement complex matching conditions, custom conflict resolution strategies, additional filtering criteria, or conditional logic that surpasses the system-generated default behavior, ensuring the bulk load aligns perfectly with specific business rules, data integrity requirements, or performance optimizations.\n\n**Field behavior**\n- When provided, this query completely replaces the standard merge statement used in bulk load operations.\n- Supports detailed customization of merge logic, including custom join conditions, conditional updates, selective inserts, and optional deletes.\n- If omitted, the system automatically generates and executes a default merge query based on the schema, keys, and data mappings.\n- The query must be syntactically compatible with the target RDBMS and support any required parameterization for dynamic data injection.\n- Executed within the transactional context of the bulk load to maintain atomicity, consistency, and rollback capabilities in case of failure.\n- The system expects the query to handle all necessary merge scenarios to avoid partial or inconsistent data states.\n- Overrides apply globally for the bulk load operation, affecting all records processed in that batch.\n\n**Implementation guidance**\n- Validate the custom SQL syntax thoroughly before execution to prevent runtime errors and ensure compatibility with the target RDBMS.\n- Ensure the query comprehensively addresses all required merge operations—insert, update, and optionally delete—to maintain data integrity.\n- Support parameter placeholders or bind variables if the system injects dynamic values during execution, and document their usage clearly.\n- Provide users with clear documentation, templates, or examples outlining the expected query structure, required clauses, and best practices.\n- Implement robust safeguards against SQL injection and other security vulnerabilities when accepting and executing custom queries.\n- Test the custom query extensively in a controlled staging environment to verify correctness, performance, and side effects before deploying to production.\n- Consider transaction isolation levels and locking behavior to avoid deadlocks or contention during bulk load operations.\n- Encourage users to include comprehensive error handling and logging within the query or surrounding execution context to facilitate troubleshooting.\n\n**Examples**\n- A MERGE statement that updates existing records based on a composite primary key and inserts new records when no match is found.\n- A merge query incorporating additional WHERE clauses to exclude certain"}},"description":"Specifies whether bulk loading is enabled for relational database management system (RDBMS) operations, allowing for the efficient insertion of large volumes of data through a single, optimized operation. Enabling bulkLoad significantly improves performance during data import, migration, or batch processing by minimizing the overhead associated with individual row inserts and leveraging database-specific bulk insert mechanisms. This feature is particularly beneficial for initial data loads, large-scale migrations, or periodic batch updates where speed, resource efficiency, and reduced transaction time are critical. Bulk loading may temporarily alter database behavior—such as disabling indexes, constraints, or triggers—to maximize throughput, and often requires elevated permissions and careful management of transactional integrity to ensure data consistency. 
Proper use of bulkLoad can lead to substantial reductions in processing time and system resource consumption during large data operations.\n\n**Field behavior**\n- When set to true, the system uses optimized bulk loading techniques to insert data in large batches, greatly enhancing throughput and efficiency.\n- When false or omitted, data insertion defaults to standard row-by-row operations, which may be slower and consume more resources.\n- Typically enabled during scenarios involving initial data population, large batch imports, or data migration processes.\n- May temporarily disable or defer enforcement of indexes, constraints, and triggers to improve performance during the bulk load operation.\n- Can affect database locking and concurrency, potentially locking tables or partitions for the duration of the bulk load.\n- Bulk loading operations may bypass certain transactional controls, affecting rollback and error recovery behavior.\n\n**Implementation guidance**\n- Verify that the target RDBMS supports bulk loading and understand its specific syntax, capabilities, and limitations.\n- Assess transaction management implications, as bulk loading may alter or bypass triggers, constraints, and rollback mechanisms.\n- Implement robust error handling and post-load validation to ensure data integrity and consistency after bulk operations.\n- Monitor system resources such as CPU, memory, and I/O throughput during bulk load to avoid performance bottlenecks or outages.\n- Plan for potential impacts on database availability, locking behavior, and concurrent access during bulk load execution.\n- Ensure data is properly staged and preprocessed to meet the format and requirements of the bulk loading mechanism.\n- Coordinate bulk loading with maintenance windows or low-traffic periods to minimize disruption.\n\n**Examples**\n- `bulkLoad: true` — Enables bulk loading to accelerate insertion of large datasets efficiently.\n- `bulkLoad: false` — Disables bulk loading, performing inserts using standard row-by-row methods.\n- `bulkLoad` omitted — Defaults"},"updateLookupName":{"type":"string","description":"Specifies the exact name of the lookup table or entity within the relational database management system (RDBMS) that is targeted for update operations. This property serves as a precise identifier to determine which lookup data set should be modified, ensuring accurate and efficient targeting within the database schema. It typically corresponds to a physical table name or a logical entity name defined in the database and must strictly align with the existing schema to enable successful updates without errors or unintended side effects. Proper use of this property is critical for maintaining data integrity during update operations on lookup data, as it directly influences which records are affected. 
Accurate specification helps prevent accidental data corruption and supports clear, maintainable database interactions.\n\n**Field behavior**\n- Defines the specific lookup table or entity to be updated during an operation.\n- Acts as a key identifier for the update process to locate the correct data set.\n- Must be provided when performing update operations on lookup data.\n- Influences the scope and effect of the update by specifying the target entity.\n- Changes to this value directly affect which data is modified in the database.\n- Invalid or missing values will cause update operations to fail or target unintended data.\n\n**Implementation guidance**\n- Ensure the value exactly matches the existing lookup table or entity name in the database schema.\n- Validate against the RDBMS naming conventions, including allowed characters, length restrictions, and case sensitivity.\n- Maintain consistent naming conventions across the application to prevent ambiguity and errors.\n- Account for case sensitivity based on the underlying database system’s configuration and collation settings.\n- Avoid using reserved keywords, special characters, or whitespace that may cause conflicts or syntax errors.\n- Implement validation checks to catch misspellings or invalid names before executing update operations.\n- Incorporate error handling to manage cases where the specified lookup name does not exist or is inaccessible.\n\n**Examples**\n- \"country_codes\"\n- \"user_roles\"\n- \"product_categories\"\n- \"status_lookup\"\n- \"department_list\"\n\n**Important notes**\n- Providing an incorrect or misspelled name will result in failed update operations or runtime errors.\n- This field must not be left empty when an update operation is intended.\n- It is only relevant and required when performing update operations on lookup tables or entities.\n- Changes to this value should be carefully managed to avoid unintended data modifications or corruption.\n- Ensure appropriate permissions exist for updating the specified lookup entity to prevent authorization failures.\n- Consistent use of this property across environments (development, staging, production) is essential"},"updateExtract":{"type":"string","description":"Specifies the comprehensive configuration and parameters for updating an existing data extract within a relational database management system (RDBMS). This property defines how the extract's data is modified, supporting various update strategies such as incremental updates, full refreshes, conditional changes, or merges. It ensures that the extract remains accurate, consistent, and aligned with business rules and data integrity requirements by controlling the scope, method, and transactional behavior of update operations. 
Additionally, it allows for fine-grained control through filters, timestamps, and criteria to selectively update portions of the extract, while managing concurrency and maintaining data consistency throughout the process.\n\n**Field behavior**\n- Determines the update strategy applied to an existing data extract, including incremental, full refresh, conditional, or merge operations.\n- Controls how new data interacts with existing extract data—whether by overwriting, appending, or merging.\n- Supports selective updates using filters, timestamps, or conditional criteria to target specific subsets of data.\n- Manages transactional integrity to ensure updates are atomic, consistent, isolated, and durable (ACID-compliant).\n- Coordinates update execution to prevent conflicts, data corruption, or partial updates.\n- Enables configuration of error handling, logging, and rollback mechanisms during update processes.\n- Handles concurrency control to avoid race conditions and ensure data consistency in multi-user environments.\n- Allows scheduling and triggering of update operations based on time, events, or external signals.\n\n**Implementation guidance**\n- Ensure update configurations comply with organizational data governance, security, and integrity policies.\n- Validate all input parameters and update conditions to avoid inconsistent or partial data modifications.\n- Implement robust transactional support to allow rollback on failure and maintain data consistency.\n- Incorporate detailed logging and error reporting to facilitate monitoring, auditing, and troubleshooting.\n- Optimize update methods based on data volume, change frequency, and performance requirements, balancing between incremental and full refresh approaches.\n- Tailor update logic to leverage specific capabilities and constraints of the target RDBMS, including locking and concurrency controls.\n- Coordinate update timing and execution with downstream systems and data consumers to minimize disruption.\n- Design update processes to be idempotent where possible to support safe retries and recovery.\n- Consider the impact of update latency on data freshness and downstream analytics.\n\n**Examples**\n- Configuring an incremental update that uses a last-modified timestamp column to append only new or changed records to the extract.\n- Defining a full refresh update that completely replaces the existing extract data with a newly extracted dataset.\n- Setting up"},"ignoreLookupName":{"type":"string","description":"Specifies whether the lookup name should be ignored during relational database management system (RDBMS) operations, such as query generation, data retrieval, or relationship resolution. When enabled, this flag instructs the system to bypass the use of the lookup name, which can alter how relationships, joins, or references are resolved within database queries. This behavior is particularly useful in scenarios where the lookup name is redundant, introduces unnecessary complexity, or causes performance overhead. 
Additionally, it allows for alternative identification methods to be prioritized over the lookup name, enabling more flexible or optimized query strategies.\n\n**Field behavior**\n- When set to true, the system excludes the lookup name from all relevant database operations, including query construction, join conditions, and filtering criteria.\n- When false or omitted, the lookup name is actively utilized to resolve references, enforce relationships, and optimize data retrieval.\n- Determines whether explicit lookup names override or supplement default naming conventions, schema-based identifiers, or inferred relationships.\n- Affects how related data is fetched, potentially influencing join strategies, query plans, and lookup optimizations.\n- Impacts the generation of SQL or other query languages by controlling the inclusion of lookup name references.\n\n**Implementation guidance**\n- Enable this flag to enhance query performance by skipping unnecessary lookup name resolution when it is known to be non-essential or redundant.\n- Thoroughly evaluate the impact on data integrity and correctness to ensure that ignoring the lookup name does not result in incomplete, inaccurate, or inconsistent query results.\n- Validate all downstream processes, components, and integrations that depend on lookup names to prevent breaking dependencies or causing data inconsistencies.\n- Consider the underlying database schema design, naming conventions, and relationship mappings before enabling this flag to avoid unintended side effects.\n- Integrate this flag within query builders, ORM layers, data access modules, or middleware to conditionally include or exclude lookup names during query generation and execution.\n- Implement comprehensive testing and monitoring to detect any adverse effects on application behavior or data retrieval accuracy when this flag is toggled.\n\n**Examples**\n- `ignoreLookupName: true` — The system bypasses the lookup name, generating queries without referencing it, which may simplify query logic and improve execution speed.\n- `ignoreLookupName: false` — The lookup name is included in query logic, ensuring that relationships and references are resolved using the defined lookup identifiers.\n- Omission of the property defaults to `false`, meaning lookup names are considered and used unless explicitly ignored.\n\n**Important notes**\n- Ign"},"ignoreExtract":{"type":"string","description":"Specifies whether the extraction process from the relational database management system (RDBMS) should be entirely skipped. When set to true, the system bypasses the data extraction step, which is particularly useful in scenarios where data extraction is managed externally, has already been completed, or is unnecessary for the current operation. This flag directly controls whether the extraction engine initiates the retrieval of data from the source database, thereby influencing all subsequent stages that depend on the extracted data, such as transformation, validation, and loading. 
Proper use of this flag ensures flexibility in workflows by allowing integration with external data pipelines or pre-extracted datasets without redundant extraction efforts.\n\n**Field behavior**\n- Determines if the data extraction phase from the RDBMS is executed or omitted.\n- When true, no data is pulled from the source database, effectively skipping extraction.\n- When false or omitted, the extraction process runs normally, retrieving data as configured.\n- Influences downstream processes such as data transformation, validation, and loading that depend on extracted data.\n- Must be explicitly set to true to skip extraction; otherwise, extraction proceeds by default.\n- Impacts the overall data pipeline flow by potentially altering the availability of fresh data.\n\n**Implementation guidance**\n- Default value should be false to ensure extraction occurs unless intentionally overridden.\n- Validate input strictly as a boolean to prevent misconfiguration.\n- Ensure that skipping extraction does not cause failures or data inconsistencies in subsequent pipeline stages.\n- Use this flag in workflows where extraction is handled outside the current system or when working with pre-extracted datasets.\n- Incorporate checks or safeguards to confirm that necessary data is available from alternative sources when extraction is skipped.\n- Log or notify when extraction is skipped to maintain transparency in data processing workflows.\n- Coordinate with other system components to handle scenarios where extraction is bypassed, ensuring smooth pipeline execution.\n\n**Examples**\n- `ignoreExtract: true` — Extraction step is completely bypassed.\n- `ignoreExtract: false` — Extraction step is performed as usual.\n- Field omitted — Defaults to false, so extraction occurs normally.\n- Used in a pipeline where data is pre-loaded from a file or external system, setting `ignoreExtract: true` to avoid redundant extraction.\n\n**Important notes**\n- Setting this flag to true assumes that the system has access to the required data through other means; otherwise, downstream processes may fail or produce incomplete results.\n- Incorrect use can lead to missing data, causing errors or inconsistencies in the data pipeline."}}},"S3":{"type":"object","description":"Configuration for S3 exports","properties":{"region":{"type":"string","description":"The AWS region where the S3 bucket is physically located, representing the specific geographical area within AWS's global infrastructure that hosts the bucket. This designation directly affects data access latency, availability, redundancy, and compliance with regional data governance, privacy laws, and residency requirements. Selecting the appropriate region is critical for optimizing performance, minimizing costs related to data transfer and storage, and ensuring adherence to legal and organizational policies. The region must be specified using a valid AWS region identifier that accurately corresponds to the bucket's actual location to avoid connectivity issues, authentication failures, and improper routing of API requests. 
Proper region selection also influences disaster recovery strategies, service availability, and integration with other AWS services, making it a foundational parameter in S3 bucket configuration and management.\n\n**Field behavior**\n- Defines the physical and logical location of the S3 bucket within AWS's global network.\n- Directly impacts data access speed, latency, and throughput.\n- Determines compliance with regional data sovereignty, privacy, and security regulations.\n- Must be a valid AWS region code that matches the bucket’s deployment location.\n- Influences availability zones, fault tolerance, and disaster recovery planning.\n- Affects endpoint URL construction and API request routing.\n- Plays a role in cost optimization by affecting storage pricing and data transfer fees.\n- Governs integration capabilities with other AWS services available in the selected region.\n\n**Implementation guidance**\n- Use official AWS region codes such as \"us-east-1\", \"eu-west-2\", or \"ap-southeast-2\" as defined by AWS.\n- Validate the region value against the current list of supported AWS regions to prevent misconfiguration.\n- Ensure the specified region exactly matches the bucket’s physical location to avoid access and authentication errors.\n- Consider cost implications including storage pricing, data transfer fees, and cross-region replication expenses.\n- When migrating or replicating buckets across regions, update this property accordingly and plan for data transfer and downtime.\n- Verify service availability and feature support in the chosen region before deployment.\n- Regularly review AWS region updates and deprecations to maintain compatibility.\n- Use region-specific endpoints when constructing API requests to ensure proper routing.\n\n**Examples**\n- us-east-1 (Northern Virginia, USA)\n- eu-central-1 (Frankfurt, Germany)\n- ap-southeast-2 (Sydney, Australia)\n- sa-east-1 (São Paulo, Brazil)\n- ca-central-1 (Canada Central)\n\n**Important notes**\n- Incorrect or mismatched region specification"},"bucket":{"type":"string","description":"The name of the Amazon S3 bucket where data or objects are stored or will be stored. This bucket serves as the primary container within the AWS S3 service, uniquely identifying the storage location for your data across all AWS accounts globally. The bucket name must strictly comply with AWS naming conventions, including being DNS-compliant, entirely lowercase, and free of underscores, uppercase letters, or IP address-like formats, ensuring global uniqueness and compatibility. It is essential that the specified bucket already exists and that the user or service has the necessary permissions to access, upload, or modify its contents. 
Selecting the appropriate bucket, particularly with regard to its AWS region, can significantly impact application performance, cost efficiency, and adherence to data residency and regulatory requirements.\n\n**Field behavior**\n- Specifies the target S3 bucket for all data storage and retrieval operations.\n- Must reference an existing bucket with proper access permissions.\n- Acts as a globally unique identifier within the AWS S3 ecosystem.\n- Immutable after initial assignment; changing the bucket requires specifying a new bucket name.\n- Influences data locality, latency, cost, and compliance based on the bucket’s region.\n\n**Implementation guidance**\n- Validate bucket names against AWS S3 naming rules: lowercase letters, numbers, and hyphens only; no underscores, uppercase letters, or IP address formats.\n- Confirm the bucket’s existence and verify access permissions before performing operations.\n- Consider the bucket’s AWS region to optimize latency, cost, and compliance with data governance policies.\n- Implement comprehensive error handling for scenarios such as bucket not found, access denied, or insufficient permissions.\n- Align bucket selection with organizational policies and regulatory requirements concerning data storage and residency.\n\n**Examples**\n- \"my-app-data-bucket\"\n- \"user-uploads-2024\"\n- \"company-backups-eu-west-1\"\n\n**Important notes**\n- Bucket names are globally unique across all AWS accounts and cannot be reused or renamed once created.\n- Access control is managed through AWS IAM roles, policies, and bucket policies, which must be correctly configured.\n- The bucket’s region selection is critical for optimizing performance, minimizing costs, and ensuring legal compliance.\n- Bucket naming restrictions prohibit uppercase letters, underscores, and IP address-like formats to maintain DNS compliance.\n\n**Dependency chain**\n- Depends on the availability and stability of the AWS S3 service.\n- Requires valid AWS credentials with appropriate permissions to access or modify the bucket.\n- Typically used alongside object keys, region configurations,"},"fileKey":{"type":"string","description":"The unique identifier or path string that specifies the exact location of a file stored within an Amazon S3 bucket. This key acts as the primary reference for locating, accessing, managing, and manipulating a specific file within the storage system. It can be a simple filename or a hierarchical path that mimics folder structures to logically organize files within the bucket. The fileKey is case-sensitive and must be unique within the bucket to prevent conflicts or overwriting. It supports UTF-8 encoded characters and can include delimiters such as slashes (\"/\") to simulate directories, enabling structured file organization. Proper encoding and adherence to AWS naming conventions are essential to ensure seamless integration with S3 APIs and web requests. 
Additionally, the fileKey plays a critical role in access control, lifecycle policies, and event notifications within the S3 environment.\n\n**Field behavior**\n- Uniquely identifies a file within the S3 bucket namespace.\n- Serves as the primary reference in all file-related operations such as retrieval, update, deletion, and metadata management.\n- Case-sensitive and must be unique to avoid overwriting or conflicts.\n- Supports folder-like delimiters (e.g., slashes) to simulate directory structures for logical organization.\n- Used as a key parameter in all relevant S3 API operations, including GetObject, PutObject, DeleteObject, and CopyObject.\n- Influences access permissions and lifecycle policies when used with prefixes or specific key patterns.\n\n**Implementation guidance**\n- Use a UTF-8 encoded string that accurately reflects the file’s intended path or name.\n- Avoid unsupported or problematic characters; strictly follow S3 key naming conventions to prevent errors.\n- Incorporate logical folder structures using delimiters for better organization and maintainability.\n- Ensure URL encoding when the key is included in web requests or URLs to handle special characters properly.\n- Validate that the key length does not exceed S3’s maximum allowed size (up to 1024 bytes).\n- Consider the impact of key naming on access control policies, lifecycle rules, and event triggers.\n\n**Examples**\n- \"documents/report2024.pdf\"\n- \"images/profile/user123.png\"\n- \"backups/2023/12/backup.zip\"\n- \"videos/2024/events/conference.mp4\"\n- \"logs/2024/06/15/server.log\"\n\n**Important notes**\n- The fileKey is case-sensitive; for example, \"File.txt\" and \"file.txt\" are distinct keys.\n- Changing the fileKey"},"backupBucket":{"type":"string","description":"The name of the Amazon S3 bucket designated for securely storing backup data. This bucket serves as the primary repository for saving copies of critical information, ensuring data durability, integrity, and availability for recovery in the event of data loss, corruption, or disaster. It must be a valid, existing bucket within the AWS environment, configured with appropriate permissions and security settings to facilitate reliable and efficient backup operations. Proper configuration of this bucket—including enabling versioning to preserve historical backup states, applying encryption to protect data at rest, and setting lifecycle policies to manage storage costs and data retention—is essential to optimize backup management, maintain compliance with organizational and regulatory requirements, and control expenses. The bucket should ideally reside in the same AWS region as the backup source to minimize latency and data transfer costs, and may incorporate cross-region replication for enhanced disaster recovery capabilities. 
Additionally, the bucket should be monitored regularly for access patterns, storage usage, and security compliance to ensure ongoing protection and operational efficiency.\n\n**Field behavior**\n- Specifies the exact S3 bucket where backup data will be written and stored.\n- Must reference a valid, existing bucket within the AWS account and region.\n- Used exclusively during backup processes to save data snapshots or incremental backups.\n- Requires appropriate write permissions to allow backup services to upload data.\n- Typically remains static unless backup storage strategies or requirements change.\n- Supports integration with backup scheduling and monitoring systems.\n- Facilitates data recovery by maintaining backup copies in a secure, durable location.\n\n**Implementation guidance**\n- Verify that the bucket name adheres to AWS S3 naming conventions and is DNS-compliant.\n- Confirm the bucket exists and is accessible by the backup service or user.\n- Set up IAM roles and bucket policies to grant necessary write and read permissions securely.\n- Enable bucket versioning to maintain historical backup versions and support data recovery.\n- Implement lifecycle policies to manage storage costs by archiving or deleting old backups.\n- Apply encryption (e.g., AWS KMS) to protect backup data at rest and meet compliance requirements.\n- Consider enabling cross-region replication for disaster recovery scenarios.\n- Regularly audit bucket permissions and access logs to ensure security compliance.\n- Monitor storage usage and costs to optimize resource allocation and prevent unexpected charges.\n- Align bucket configuration with organizational policies for data retention, security, and compliance.\n\n**Examples**\n- \"my-app-backups\"\n- \"company-data-backup-2024\"\n- \"prod-environment-backup-bucket\"\n- \"s3-backups-region1"},"serverSideEncryptionType":{"type":"string","description":"serverSideEncryptionType specifies the type of server-side encryption applied to objects stored in the S3 bucket, determining how data at rest is securely protected by the storage service. This property defines the encryption algorithm or method used by the server to automatically encrypt data upon storage, ensuring confidentiality, integrity, and compliance with security standards. It accepts predefined values that correspond to supported encryption mechanisms, influencing how data is encrypted, accessed, and managed during storage and retrieval operations. 
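For illustration (the `s3` nesting and the exact combination of sibling fields are assumed for this sketch, not taken verbatim from this schema; the values are placeholders drawn from the surrounding examples):\n```json\n{\n  \"s3\": {\n    \"region\": \"us-east-1\",\n    \"bucket\": \"my-app-data-bucket\",\n    \"fileKey\": \"imports/orders/2024-06-15.csv\",\n    \"serverSideEncryptionType\": \"aws:kms\"\n  }\n}\n```\n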
By configuring this property, users can enforce encryption policies that safeguard sensitive information without requiring client-side encryption management, thereby simplifying security administration and enhancing data protection.\n\n**Field behavior**\n- Defines the encryption algorithm or method used by the server to encrypt data at rest.\n- Controls whether server-side encryption is enabled or disabled for stored objects.\n- Accepts specific predefined values corresponding to supported encryption types such as AES256 or aws:kms.\n- Influences data handling during storage and retrieval to maintain data confidentiality and integrity.\n- Applies encryption automatically without requiring client-side encryption management.\n- Determines compliance with organizational and regulatory encryption requirements.\n- Affects how encryption keys are managed, accessed, and audited.\n- Impacts access control and permissions related to encrypted objects.\n- May affect performance and cost depending on the encryption method chosen.\n\n**Implementation guidance**\n- Validate the input value against supported encryption types, primarily \"AES256\" and \"aws:kms\".\n- Ensure the selected encryption type is compatible with the S3 bucket’s configuration, policies, and permissions.\n- When using \"aws:kms\", specify the appropriate AWS KMS key ID if required, and verify key permissions.\n- Gracefully handle scenarios where encryption is not specified, disabled, or set to an empty value.\n- Provide clear and actionable error messages if an unsupported or invalid encryption type is supplied.\n- Consider the impact on performance and cost when enabling encryption, especially with KMS-managed keys.\n- Document encryption settings clearly for users to understand the security posture of stored data.\n- Implement fallback or default behaviors when encryption settings are omitted.\n- Ensure that encryption settings align with audit and compliance reporting requirements.\n- Coordinate with key management policies to maintain secure key lifecycle and rotation.\n- Test encryption settings thoroughly to confirm data is encrypted and accessible as expected.\n\n**Examples**\n- \"AES256\" — Server-side encryption using Amazon S3-managed encryption keys (SSE-S3).\n- \"aws:kms\" — Server-side encryption using AWS Key Management Service-managed keys (SSE-KMS"}}},"Wrapper":{"type":"object","description":"Configuration for Wrapper exports","properties":{"function":{"type":"string","description":"Specifies the exact name of the function to be invoked within the wrapper context, enabling dynamic and flexible execution of different functionalities based on the provided value. This property acts as a critical key identifier that directs the wrapper to call a specific function, facilitating modular, reusable, and maintainable code design. The function name must correspond to a valid, accessible, and callable function within the wrapper's scope, ensuring that the intended operation is performed accurately and efficiently. 
By allowing the function to be selected at runtime, this mechanism supports dynamic behavior, adaptable workflows, and context-sensitive execution paths, enhancing the overall flexibility and scalability of the system.\n\n**Field behavior**\n- Determines which specific function the wrapper will execute during its operation.\n- Accepts a string value representing the exact name of the target function.\n- Must match a valid and accessible function within the wrapper’s execution context.\n- Influences the wrapper’s behavior and output based on the selected function.\n- Supports dynamic assignment to enable flexible and context-dependent function calls.\n- Enables runtime selection of functionality, allowing for versatile and adaptive processing.\n\n**Implementation guidance**\n- Validate that the provided function name exists and is callable within the wrapper environment before invocation.\n- Ensure the function name is supplied as a string and adheres to the naming conventions used in the wrapper’s codebase.\n- Implement robust error handling to gracefully manage cases where the specified function does not exist, is inaccessible, or fails during execution.\n- Sanitize the input to prevent injection attacks, code injection, or other security vulnerabilities.\n- Allow dynamic updates to this property to support runtime changes in function execution without requiring system restarts.\n- Log function invocation attempts and outcomes for auditing and debugging purposes.\n- Consider implementing a whitelist or registry of allowed function names to enhance security and control.\n\n**Examples**\n- \"initialize\"\n- \"processData\"\n- \"renderOutput\"\n- \"cleanupResources\"\n- \"fetchUserDetails\"\n- \"validateInput\"\n- \"exportReport\"\n- \"authenticateUser\"\n\n**Important notes**\n- The function name is case-sensitive and must precisely match the function identifier in the underlying implementation.\n- Misspelled or incorrect function names will cause invocation errors or runtime failures.\n- This property specifies only the function name; any parameters or arguments for the function should be provided separately.\n- The wrapper context must be properly initialized and configured to recognize and execute the specified function.\n- Changes to this property can significantly alter the behavior of the wrapper, so modifications should be managed carefully"},"configuration":{"type":"object","description":"The `configuration` property defines a comprehensive and centralized set of parameters, settings, and operational directives that govern the behavior, functionality, and characteristics of the wrapper component. It acts as the primary container for all customizable options that influence how the wrapper initializes, runs, and adapts to various deployment environments or user requirements. This includes environment-specific configurations, feature toggles, resource limits, integration endpoints, security credentials, logging preferences, and other critical controls. 
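As an illustrative sketch (the `wrapper` nesting is assumed here, not defined by this excerpt; the function name and configuration values echo the surrounding examples):\n```json\n{\n  \"wrapper\": {\n    \"function\": \"processData\",\n    \"configuration\": {\n      \"timeout\": 5000,\n      \"enableLogging\": true,\n      \"maxRetries\": 3\n    }\n  }\n}\n```\n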
By encapsulating these diverse settings, the configuration property enables fine-grained control over the wrapper’s runtime behavior, supports dynamic adaptability, and facilitates consistent management across different scenarios.\n\n**Field behavior**\n- Encapsulates all relevant configurable options affecting the wrapper’s operation and lifecycle.\n- Supports complex nested structures or key-value pairs to represent hierarchical and interrelated settings.\n- Can be mandatory or optional depending on the wrapper’s design and operational context.\n- Changes to configuration typically influence initialization, runtime behavior, and potentially require component restart or reinitialization.\n- May support dynamic updates if the wrapper is designed for live reconfiguration without downtime.\n- Acts as a single source of truth for operational parameters, ensuring consistency and predictability.\n\n**Implementation guidance**\n- Organize configuration parameters logically, grouping related settings to improve readability and maintainability.\n- Implement robust validation mechanisms to enforce correct data types, value ranges, and adherence to constraints.\n- Provide sensible default values for optional parameters to enhance usability and prevent misconfiguration.\n- Design the configuration schema to be extensible and backward-compatible, allowing seamless future enhancements.\n- Secure sensitive information within the configuration using encryption, secure storage, or exclusion from logs and error messages.\n- Thoroughly document each configuration option, including its purpose, valid values, default behavior, and impact on the wrapper.\n- Consider environment-specific overrides or profiles to facilitate deployment flexibility.\n- Ensure configuration changes are atomic and consistent to avoid partial or invalid states.\n\n**Examples**\n- `{ \"timeout\": 5000, \"enableLogging\": true, \"maxRetries\": 3 }`\n- `{ \"environment\": \"production\", \"apiEndpoint\": \"https://api.example.com\", \"features\": { \"beta\": false } }`\n- Nested objects specifying database connection details, authentication credentials, UI themes, third-party service integrations, or resource limits.\n- Feature flags enabling or disabling experimental capabilities dynamically.\n- Security parameters such as OAuth tokens, encryption keys, or access control lists.\n\n**Important notes**\n- Comprehensive and clear documentation of all"},"lookups":{"type":"object","properties":{"type":{"type":"string","description":"type: >\n  Specifies the category or classification of the lookup within the wrapper context.\n  This property defines the nature or kind of lookup being performed or referenced.\n  **Field behavior**\n  - Determines the specific type of lookup operation or data classification.\n  - Influences how the lookup data is processed or interpreted.\n  - May restrict or enable certain values or options based on the type selected.\n  **Implementation guidance**\n  - Use clear and consistent naming conventions for different types.\n  - Validate the value against a predefined set of allowed types to ensure data integrity.\n  - Ensure that the type aligns with the corresponding lookup logic or data source.\n  **Examples**\n  - \"userRole\"\n  - \"productCategory\"\n  - \"statusCode\"\n  - \"regionCode\"\n  **Important notes**\n  - The type value is critical for correctly resolving and handling lookup data.\n  - Changing the type may affect downstream processing or data retrieval.\n  - Ensure compatibility with other related 
fields or components that depend on this type.\n  **Dependency chain**\n  - Depends on the wrapper context to provide scope.\n  - Influences the selection and retrieval of lookup values.\n  - May be linked to validation schemas or business logic modules.\n  **Technical details**\n  - Typically represented as a string.\n  - Should conform to a controlled vocabulary or enumeration where applicable.\n  - Case sensitivity may apply depending on implementation."},"_lookupCacheId":{"type":"string","format":"objectId","description":"_lookupCacheId: Identifier used to reference a specific cache entry for lookup operations within the wrapper's lookup mechanism.\n**Field behavior**\n- Serves as a unique identifier for cached lookup data.\n- Used to retrieve or update cached lookup results efficiently.\n- Helps in minimizing redundant lookup operations by reusing cached data.\n- Typically remains constant for the lifespan of the cached data entry.\n**Implementation guidance**\n- Should be generated to ensure uniqueness within the scope of the wrapper's lookups.\n- Must be consistent and stable to correctly reference the intended cache entry.\n- Should be invalidated or updated when the underlying lookup data changes.\n- Can be a string or numeric identifier depending on the caching system design.\n**Examples**\n- \"userRolesCache_v1\"\n- \"productLookupCache_2024_06\"\n- \"regionCodesCache_42\"\n**Important notes**\n- Proper management of _lookupCacheId is critical to avoid stale or incorrect lookup data.\n- Changing the _lookupCacheId without updating the cache content can lead to lookup failures.\n- Should be used in conjunction with cache expiration or invalidation strategies.\n**Dependency chain**\n- Depends on the wrapper.lookups system to manage cached lookup data.\n- May interact with cache storage or memory management components.\n- Influences lookup operation performance and data consistency.\n**Technical details**\n- Typically implemented as a string identifier.\n- May be used as a key in key-value cache stores.\n- Should be designed to avoid collisions with other cache identifiers.\n- May include versioning or timestamp information to manage cache lifecycle."},"extract":{"type":"string","description":"Extract: Specifies the extraction criteria or pattern used to retrieve specific data from a given input within the lookup process.\n**Field behavior**\n- Defines the rules or patterns to identify and extract relevant information from input data.\n- Can be a string, regular expression, or structured query depending on the implementation.\n- Used during the lookup operation to isolate the desired subset of data.\n- May support multiple extraction patterns or conditional extraction logic.\n**Implementation guidance**\n- Ensure the extraction pattern is precise to avoid incorrect or partial data retrieval.\n- Validate the extraction criteria against sample input data to confirm accuracy.\n- Support common pattern formats such as regex for flexible and powerful extraction.\n- Provide clear error handling if the extraction pattern fails or returns no results.\n- Allow for optional extraction parameters to customize behavior (e.g., case sensitivity).\n**Examples**\n- A regex pattern like `\"\\d{4}-\\d{2}-\\d{2}\"` to extract dates in YYYY-MM-DD format.\n- A JSONPath expression such as `$.store.book[*].author` to extract authors from a JSON object.\n- A simple substring like `\"ERROR:\"` to extract error messages from logs.\n- XPath query to extract nodes from XML data.\n**Important notes**\n- The 
extraction logic directly impacts the accuracy and relevance of the lookup results.\n- Complex extraction patterns may affect performance; optimize where possible.\n- Extraction should handle edge cases such as missing fields or unexpected data formats gracefully.\n- Document supported extraction pattern types and syntax clearly for users.\n**Dependency chain**\n- Depends on the input data format and structure to define appropriate extraction criteria.\n- Works in conjunction with the lookup mechanism that applies the extraction.\n- May interact with validation or transformation steps post-extraction.\n**Technical details**\n- Typically implemented using pattern matching libraries or query language parsers.\n- May require escaping or encoding special characters within extraction patterns.\n- Should support configurable options like multiline matching or case sensitivity.\n- Extraction results are usually returned as strings, arrays, or structured objects depending on context."},"map":{"type":"object","description":"map: >\n  A mapping object that defines key-value pairs used for lookup operations within the wrapper context.\n  **Field behavior**\n  - Serves as a dictionary or associative array for translating or mapping input keys to corresponding output values.\n  - Used to customize or override default lookup values dynamically.\n  - Supports string keys and values, but may also support other data types depending on implementation.\n  **Implementation guidance**\n  - Ensure keys are unique within the map to avoid conflicts.\n  - Validate that values conform to expected formats or types required by the lookup logic.\n  - Consider immutability or controlled updates to prevent unintended side effects during runtime.\n  - Provide clear error handling for missing or invalid keys during lookup operations.\n  **Examples**\n  - {\"US\": \"United States\", \"CA\": \"Canada\", \"MX\": \"Mexico\"}\n  - {\"error404\": \"Not Found\", \"error500\": \"Internal Server Error\"}\n  - {\"env\": \"production\", \"version\": \"1.2.3\"}\n  **Important notes**\n  - The map is context-specific and should be populated according to the needs of the wrapper's lookup functionality.\n  - Large maps may impact performance; optimize size and access patterns accordingly.\n  - Changes to the map may require reinitialization or refresh of dependent components.\n  **Dependency chain**\n  - Depends on the wrapper.lookups context for proper integration.\n  - May be referenced by other properties or methods performing lookup operations.\n  **Technical details**\n  - Typically implemented as a JSON object or dictionary data structure.\n  - Keys and values are usually strings but can be extended to other serializable types.\n  - Should support efficient retrieval, ideally O(1) time complexity for lookups."},"default":{"type":["object","null"],"description":"The default property specifies the fallback or initial value to be used within the wrapper's lookups configuration when no other specific value is provided or applicable. 
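For instance, a single lookup entry combining these fields might look like the following (illustrative only; the overall entry shape is assumed, and the values echo the surrounding examples, with `default` left as null):\n```json\n{\n  \"map\": {\n    \"US\": \"United States\",\n    \"CA\": \"Canada\"\n  },\n  \"default\": null,\n  \"allowFailures\": true\n}\n```\nHere `default` is the value returned when no key matches. 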
This ensures that the system has a predefined baseline value to operate with, preventing errors or undefined behavior.\n\n**Field behavior**\n- Acts as the fallback value in the lookup process.\n- Used when no other matching lookup value is found.\n- Provides a baseline or initial value for the wrapper configuration.\n- Ensures consistent behavior by avoiding null or undefined states.\n\n**Implementation guidance**\n- Define a sensible and valid default value that aligns with the expected data type and usage context.\n- Ensure the default value is compatible with other dependent fields or processes.\n- Update the default value cautiously, as it impacts the fallback behavior system-wide.\n- Validate the default value during configuration loading to prevent runtime errors.\n\n**Examples**\n- A string value such as \"N/A\" or \"unknown\" for textual lookups.\n- A numeric value like 0 or -1 for numerical lookups.\n- A boolean value such as false when a true/false default is needed.\n- An object or dictionary representing a default configuration set.\n\n**Important notes**\n- The default value should be meaningful and appropriate to avoid misleading results.\n- Overriding the default value may affect downstream logic relying on fallback behavior.\n- Absence of a default value might lead to errors or unexpected behavior in the lookup process.\n- Consider localization or context-specific defaults if applicable.\n\n**Dependency chain**\n- Used by the wrapper.lookups mechanism during value resolution.\n- May influence or be influenced by other lookup-related properties or configurations.\n- Interacts with error handling or fallback logic in the system.\n\n**Technical details**\n- Data type should match the expected type of lookup values (string, number, boolean, object, etc.).\n- Stored and accessed as part of the wrapper's lookup configuration object.\n- Should be immutable during runtime unless explicitly reconfigured.\n- May be serialized/deserialized as part of configuration files or API payloads."},"allowFailures":{"type":"boolean","description":"allowFailures: Indicates whether the system should permit failures during the lookup process without aborting the entire operation.\n**Field behavior**\n- When set to true, individual lookup failures are tolerated, allowing the process to continue.\n- When set to false, any failure in the lookup process causes the entire operation to fail.\n- Helps in scenarios where partial results are acceptable or expected.\n**Implementation guidance**\n- Use this flag to control error handling behavior in lookup operations.\n- Ensure that downstream processes can handle partial or incomplete data if allowFailures is true.\n- Log or track failures when allowFailures is enabled to aid in debugging and monitoring.\n**Examples**\n- allowFailures: true — The system continues processing even if some lookups fail.\n- allowFailures: false — The system stops and reports an error immediately upon a lookup failure.\n**Important notes**\n- Enabling allowFailures may result in incomplete or partial data sets.\n- Use with caution in critical systems where data integrity is paramount.\n- Consider combining with retry mechanisms or fallback strategies.\n**Dependency chain**\n- Depends on the lookup operation within wrapper.lookups.\n- May affect error handling and result aggregation components.\n**Technical details**\n- Boolean value: true or false.\n- Default behavior should be defined explicitly to avoid ambiguity.\n- Should be checked before executing lookup calls to determine 
error handling flow."},"_id":{"type":"object","description":"Unique identifier for the lookup entry within the wrapper context.\n**Field behavior**\n- Serves as the primary key to uniquely identify each lookup entry.\n- Immutable once assigned to ensure consistent referencing.\n- Used to retrieve, update, or delete specific lookup entries.\n**Implementation guidance**\n- Should be generated using a globally unique identifier (e.g., UUID or ObjectId).\n- Must be indexed in the database for efficient querying.\n- Should be validated to ensure uniqueness within the wrapper.lookups collection.\n**Examples**\n- \"507f1f77bcf86cd799439011\"\n- \"a1b2c3d4e5f6789012345678\"\n**Important notes**\n- This field is mandatory for every lookup entry.\n- Changing the _id after creation can lead to data inconsistency.\n- Should not contain sensitive or personally identifiable information.\n**Dependency chain**\n- Used by other fields or services that reference lookup entries.\n- May be linked to foreign keys or references in related collections or tables.\n**Technical details**\n- Typically stored as a string or ObjectId type depending on the database.\n- Must conform to the format and length constraints of the chosen identifier scheme.\n- Should be generated server-side to prevent collisions."},"cLocked":{"type":"object","description":"cLocked indicates whether the item is currently locked, preventing modifications or deletions.\n**Field behavior**\n- Represents the lock status of an item within the system.\n- When set to true, the item is considered locked and cannot be edited or deleted.\n- When set to false, the item is unlocked and available for modifications.\n- Typically used to enforce data integrity and prevent concurrent conflicting changes.\n**Implementation guidance**\n- Should be a boolean value: true or false.\n- Ensure that any operation attempting to modify or delete the item checks the cLocked status first.\n- Update the cLocked status appropriately when locking or unlocking the item.\n- Consider integrating with user permissions to control who can change the lock status.\n**Examples**\n- cLocked: true (The item is locked and cannot be changed.)\n- cLocked: false (The item is unlocked and editable.)\n**Important notes**\n- Locking an item does not necessarily mean it is read-only; it depends on the system's enforcement.\n- The lock status should be clearly communicated to users to avoid confusion.\n- Changes to cLocked should be logged for audit purposes.\n**Dependency chain**\n- May depend on user roles or permissions to set or unset the lock.\n- Other fields or operations may check cLocked before proceeding.\n**Technical details**\n- Data type: Boolean.\n- Default value is typically false unless specified otherwise.\n- Stored as part of the lookup or wrapper object in the data model.\n- Should be efficiently queryable to enforce lock checks during operations."}},"description":"A collection of key-value pairs that serve as a centralized reference dictionary to map or translate specific identifiers, codes, or keys into meaningful, human-readable, or contextually relevant values within the wrapper object. This property facilitates data normalization, standardization, and clarity across various components of the application by providing descriptive labels, metadata, or explanatory information associated with each key. 
It enhances data interpretation, validation, and presentation by enabling consistent and seamless conversion of coded data into understandable formats, supporting both simple and complex hierarchical mappings as needed.\n\n**Field behavior**\n- Functions as a lookup dictionary to resolve codes or keys into descriptive, user-friendly values.\n- Supports consistent data representation and interpretation throughout the application.\n- Commonly accessed during data processing, validation, or UI rendering to enhance readability.\n- May support hierarchical or nested mappings for complex relationships.\n- Typically remains static or changes infrequently but should reflect the latest valid mappings.\n\n**Implementation guidance**\n- Structure as an object, map, or dictionary with unique, well-defined keys and corresponding descriptive values.\n- Ensure keys are consistent, unambiguous, and conform to expected formats or standards.\n- Values should be concise, clear, and informative to facilitate easy understanding by end-users or systems.\n- Support nested or multi-level lookups if the domain requires hierarchical mapping.\n- Validate lookup data integrity regularly to prevent missing, outdated, or incorrect mappings.\n- Consider immutability or concurrency controls if the lookup data is accessed or modified concurrently.\n- Optimize for efficient retrieval, aiming for constant-time (O(1)) lookup performance.\n\n**Examples**\n- Mapping ISO country codes to full country names, e.g., {\"US\": \"United States\", \"FR\": \"France\"}.\n- Translating HTTP status codes to descriptive messages, e.g., {\"200\": \"OK\", \"404\": \"Not Found\"}.\n- Converting product or SKU identifiers to product names within an inventory management system.\n- Mapping error codes to user-friendly error descriptions.\n- Associating role IDs with role names in an access control system.\n\n**Important notes**\n- Keep the lookup collection current to accurately reflect any updates in codes or their meanings.\n- Avoid including sensitive, confidential, or personally identifiable information within lookup values.\n- Monitor and manage the size of the lookup collection to prevent performance degradation.\n- Ensure thread-safety or appropriate synchronization mechanisms if the lookups are mutable and accessed concurrently.\n- Changes to lookup data can have widespread effects on data interpretation and user interface behavior."}}},"Salesforce":{"type":"object","description":"Configuration for Salesforce imports containing operation type, API selection, and object-specific settings.\n\n**IMPORTANT:** This schema defines ONLY the Salesforce-specific configuration properties. Properties like `ignoreExisting`, `ignoreMissing`, `name`, `description` are NOT part of this schema - they belong at a different level.\n\nWhen asked to generate this configuration, return an object with these properties:\n- `operation` (REQUIRED): The Salesforce operation type (insert, update, upsert, etc.)\n- `api` (REQUIRED): The Salesforce API to use (default: \"soap\")\n- `sObjectType`: The Salesforce object type (e.g., \"Account\", \"Contact\", \"Vendor__c\")\n- `idLookup`: Configuration for looking up existing records\n- `upsert`: Configuration for upsert operations (when operation is \"upsert\")\n- And other operation-specific properties as needed","properties":{"lookups":{"type":["string","null"],"description":"A collection of lookup tables or reference data sets that provide standardized, predefined values or options within the application context. 
These lookup tables enable consistent data entry, validation, and interpretation by defining controlled vocabularies or permissible values for various fields or parameters across the system. They serve as authoritative sources for reference data, ensuring uniformity and reducing errors in data handling and user interactions. Lookup tables typically encompass static or infrequently changing data that supports consistent business logic and user interface behavior across multiple modules or components. By centralizing these reference datasets, the system promotes reusability, simplifies maintenance, and enhances data integrity throughout the application lifecycle.\n\n**Field behavior**\n- Contains multiple named lookup tables, each representing a distinct category or domain of reference data.\n- Used to populate user interface elements such as dropdown menus, autocomplete fields, and selection lists.\n- Supports validation logic by restricting input to predefined permissible values.\n- Typically consists of static or infrequently changing data that underpins consistent data interpretation.\n- May be referenced by multiple components or modules within the application to maintain data consistency.\n- Enables dynamic adaptation of UI and business rules by centralizing reference data management.\n- Facilitates localization by supporting language-specific labels or descriptions where applicable.\n- Often versioned or managed to track changes and ensure backward compatibility.\n\n**Implementation guidance**\n- Structure as a dictionary or map where keys correspond to lookup table names and values are arrays or lists of unique, well-defined entries.\n- Ensure each lookup entry is unique within its table and includes necessary metadata such as codes, descriptions, or display labels.\n- Design for easy updates or extensions to lookup tables without compromising existing data integrity or system stability.\n- Implement caching strategies to optimize performance, especially for client-side consumption.\n- Consider supporting localization and internationalization by including language-specific labels or descriptions where applicable.\n- Incorporate versioning or change management mechanisms to handle updates without disrupting dependent systems.\n- Validate data integrity rigorously during ingestion or updates to prevent duplicates, inconsistencies, or invalid entries.\n- Separate lookup data from business logic to maintain clarity and ease of maintenance.\n- Provide clear documentation for each lookup table to facilitate understanding and correct usage by developers and users.\n- Ensure lookup data is accessible through standardized APIs or interfaces to promote interoperability.\n\n**Examples**\n- A \"countries\" lookup containing standardized country codes (e.g., ISO 3166-1 alpha-2) and their corresponding country names.\n- A \"statusCodes\" lookup defining permissible status values such as \""},"operation":{"type":"string","enum":["insert","update","upsert","upsertpicklistvalues","delete","addupdate"],"description":"Specifies the type of Salesforce operation to perform on records. 
This determines how data is created, modified, or removed in Salesforce.\n\n**Field behavior**\n- Defines the specific Salesforce action to execute (insert, update, upsert, etc.)\n- Dictates required fields and validation rules for the operation\n- Determines API endpoint and request structure\n- Automatically converted to lowercase before processing\n- Influences response format and error handling\n\n**Operation types**\n\n****insert****\n- **Purpose:** Create new records in Salesforce\n- **Behavior:** Creates new records\n- **Use when:** Creating brand new records\n- **With ignoreExisting: true:** Checks for existing records and skips them (requires idLookup.whereClause)\n- **Without ignoreExisting:** Creates all records without checking for duplicates\n- **Example:** \"Insert new leads from marketing campaign\"\n- **Example with ignoreExisting:** \"Create vendors while ignoring existing vendors\" → operation: \"insert\" + ignoreExisting: true\n\n****update****\n- **Purpose:** Modify existing records in Salesforce\n- **Behavior:** Requires record ID; fails if record doesn't exist\n- **Use when:** Updating known existing records\n- **Example:** \"Update customer status based on order completion\"\n\n****upsert****\n- **Purpose:** Create or update records based on external ID\n- **Behavior:** Updates if external ID matches, creates if not found\n- **Use when:** Syncing data from external systems with external ID field\n- **Example:** \"Upsert customers by External_ID__c field\"\n- **REQUIRES:**\n  - `idLookup.extract` field (maps incoming field to External ID)\n  - `upsert.externalIdField` field (specifies Salesforce External ID field)\n  - Both fields MUST be set for upsert to work\n\n****upsertpicklistvalues****\n- **Purpose:** Create or update picklist values dynamically\n- **Behavior:** Manages picklist value creation/updates\n- **Use when:** Maintaining picklist values from external data\n- **Example:** \"Sync product categories to Salesforce picklist\"\n\n****delete****\n- **Purpose:** Remove records from Salesforce\n- **Behavior:** Requires record ID; records moved to recycle bin\n- **Use when:** Removing obsolete or invalid records\n- **Example:** \"Delete expired opportunities\"\n\n****addupdate****\n- **Purpose:** Create new or update existing records (Celigo-specific)\n- **Behavior:** Similar to upsert but uses different matching logic\n- **Use when:** Upserting without requiring external ID field\n- **Example:** \"Sync contacts, create new or update existing by email\"\n\n**Implementation guidance**\n- Choose operation based on whether records are new, existing, or unknown\n- For **insert**: Ensure records are truly new to avoid errors\n- For **insert with ignoreExisting: true**: MUST set `idLookup.whereClause` to check for existing records\n- For **update/delete**: MUST set `idLookup.whereClause` to identify records\n- For **upsert**: MUST also include the `upsert` object with `externalIdField` property\n- For **upsert**: MUST also set `idLookup.extract` to map incoming field\n- For **addupdate**: MUST set `idLookup.whereClause` to define matching criteria\n- Use **upsertpicklistvalues** only for picklist management scenarios\n\n**Common patterns**\n- **\"Create X while ignoring existing X\"** → operation: \"insert\" + api: \"soap\" + idLookup.whereClause\n- **\"Create or update X\"** → operation: \"upsert\" + api: \"soap\" + upsert.externalIdField + idLookup.extract\n- **\"Update X\"** → operation: \"update\" + api: \"soap\" + idLookup.whereClause\n- **\"Sync X\"** → 
operation: \"addupdate\" + api: \"soap\" + idLookup.whereClause\n- **ALWAYS include** api field (default to \"soap\" if not specified in prompt)\n\n**Examples**\n\n**Insert new records**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Lead\"\n}\n```\nPrompt: \"Create new leads in Salesforce\"\n\n**Insert with ignore existing**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Vendor__c\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{VendorName}}'\"\n  }\n}\n```\nPrompt: \"Create vendor records while ignoring existing vendors\"\n\n**NOTE:** For \"ignore existing\" prompts, also remember that `ignoreExisting: true` belongs at a different level (not part of this salesforce configuration).\n\n**Update existing records**\n```json\n{\n  \"operation\": \"update\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Account\",\n  \"idLookup\": {\n    \"whereClause\": \"Id = '{{AccountId}}'\"\n  }\n}\n```\nPrompt: \"Update existing accounts in Salesforce\"\n\n**Upsert by external id**\n```json\n{\n  \"operation\": \"upsert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Contact\",\n  \"idLookup\": {\n    \"extract\": \"ExternalId\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\nPrompt: \"Create or update contacts by External_ID__c\"\n\n**Addupdate (create or update)**\n```json\n{\n  \"operation\": \"addupdate\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Customer__c\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{CustomerName}}'\"\n  }\n}\n```\nPrompt: \"Sync customers, create new or update existing\"\n\n**Important notes**\n- Operation value is case-insensitive (automatically converted to lowercase)\n- Invalid operation values will cause API validation errors\n- Each operation has different requirements for record identification\n- **CRITICAL for upsert**: MUST include the `upsert` object with `externalIdField` property\n- **CRITICAL for upsert**: MUST set `idLookup.extract` to map incoming field to External ID\n- **upsert** requires external ID field to be configured in Salesforce\n- **addupdate** is Celigo-specific and may not work with all Salesforce APIs\n- Operations must align with user's Salesforce permissions\n- Some operations support bulk processing more efficiently than others"},"api":{"type":"string","enum":["soap","rest","metadata","compositerecord"],"description":"**REQUIRED FIELD:** Specifies which Salesforce API to use for the import operation. Different APIs have different capabilities, performance characteristics, and use cases.\n\n**CRITICAL:** This field MUST ALWAYS be set. 
If no specific API is mentioned in the prompt, **DEFAULT to \"soap\"**.\n\n**Field behavior**\n- **REQUIRED** for all Salesforce imports\n- Determines the protocol and endpoint used for Salesforce communication\n- Influences request format, batch sizes, and response handling\n- Automatically converted to lowercase before processing\n- Affects available features and operation types\n- Impacts performance and error handling behavior\n\n**Api types**\n\n****soap** (Default - Most Common)**\n- **Purpose:** SOAP API with additional features like AllOrNone transaction control\n- **Best for:** Transactional imports requiring rollback capabilities\n- **Features:** AllOrNone header, enhanced transaction control, XML-based\n- **Limits:** Similar to REST but with more overhead\n- **Use when:** Need transaction rollback, legacy integrations\n- **Example:** \"Import invoices with all-or-none transaction guarantee\"\n\n****rest****\n- **Purpose:** Standard REST API for single-record and small-batch operations\n- **Best for:** Real-time imports, small datasets (< 200 records/call)\n- **Features:** Full CRUD operations, flexible querying, rich error messages\n- **Limits:** 200 records per API call\n- **Use when:** Standard import operations, real-time data sync\n- **Example:** \"Import leads from web form submissions\"\n\n****metadata****\n- **Purpose:** Metadata API for importing configuration and setup data\n- **Best for:** Deploying Salesforce configuration, not data records\n- **Features:** Deploy custom objects, fields, workflows, etc.\n- **Use when:** Deploying Salesforce configuration between orgs\n- **Example:** \"Deploy custom object definitions from non-production to production\"\n- **Note:** NOT for data records - only for metadata\n\n****compositerecord****\n- **Purpose:** Composite Graph API for related record trees\n- **Best for:** Creating multiple related records in one request\n- **Features:** Create parent-child relationships atomically\n- **Use when:** Importing hierarchical data (e.g., Account + Contacts + Opportunities)\n- **Example:** \"Import accounts with their related contacts in one operation\"\n\n**Implementation guidance**\n- **MUST ALWAYS SET THIS FIELD** - Never leave it undefined\n- **Default to \"soap\"** for most import operations (90% of use cases)\n- If prompt doesn't specify API type, use **\"soap\"**\n- Use the **allOrNone** SOAP header when all-or-none transaction control is explicitly required\n- Use **\"metadata\"** only for configuration deployment, never for data\n- Use **\"compositerecord\"** for complex parent-child relationship imports\n- Consider API limits and performance characteristics for your data volume\n- Ensure the Salesforce connection supports the chosen API type\n\n**Default choice**\nWhen in doubt or when no API is mentioned in the prompt → **USE \"soap\"**\n\n**Examples**\n\n**Rest api**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"rest\",\n  \"sObjectType\": \"Lead\"\n}\n```\nPrompt: \"Import leads into Salesforce using the REST API\"\n\n**Soap api with transaction control**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Invoice__c\",\n  \"soap\": {\n    \"headers\": {\n      \"allOrNone\": true\n    }\n  }\n}\n```\nPrompt: \"Import invoices with all-or-none guarantee\"\n\n**Composite Record api**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"compositerecord\",\n  \"sObjectType\": \"Account\"\n}\n```\nPrompt: \"Import accounts with related contacts and opportunities\"\n\n**Important notes**\n- API value is 
case-insensitive (automatically converted to lowercase)\n- **SOAP API is the default when no API is specified in the prompt**\n- SOAP API has slightly more overhead but provides transaction guarantees\n- Metadata API is for configuration only, not data records\n- Composite Record API requires careful relationship mapping\n- Each API has different rate limits and performance characteristics\n- Ensure your Salesforce connection and permissions support the chosen API"},"soap":{"type":"object","properties":{"headers":{"type":"object","properties":{"allOrNone":{"type":"boolean","description":"allOrNone specifies whether the Salesforce API operation should be executed atomically, ensuring that either all parts of the operation succeed or none do. When set to true, if any record in a batch operation fails, the entire transaction is rolled back and no changes are committed, preserving data integrity across the dataset. When set to false, the operation allows partial success, meaning some records may be processed and committed even if others fail, which can improve performance but requires careful error handling to manage partial failures. This property is primarily applicable to data manipulation operations such as create, update, delete, and upsert, and is critical for maintaining consistent and reliable data states in multi-record transactions.\n\n**Field behavior**\n- When true, enforces atomicity: all records in the batch must succeed or the entire operation is rolled back with no changes applied.\n- When false, permits partial success: successfully processed records are committed even if some records fail.\n- Applies mainly to batch DML operations including create, update, delete, and upsert.\n- Helps maintain data consistency by preventing partial updates when strict transactional integrity is required.\n- Influences how errors are reported and handled in the API response.\n- Affects transaction commit behavior and rollback mechanisms within the Salesforce platform.\n\n**Implementation guidance**\n- Set to true to guarantee strict data consistency and avoid partial data changes that could lead to inconsistent states.\n- Set to false to allow higher throughput and partial processing when some failures are acceptable or expected.\n- Default value is false if the property is omitted, allowing partial success by default.\n- When false, implement robust error handling to process partial success responses and handle individual record errors appropriately.\n- Verify that the target Salesforce API operation supports the allOrNone header before use.\n- Consider the impact on transaction duration and system resources when enabling atomic operations.\n- Use in scenarios where data integrity is paramount, such as financial or compliance-related data updates.\n\n**Examples**\n- allOrNone = true: Attempting to update 10 records where one record fails validation results in no records being updated, preserving atomicity.\n- allOrNone = false: Attempting to update 10 records where one record fails validation results in 9 records updated successfully and one failure reported.\n- allOrNone = true: Deleting multiple records in a batch will either delete all specified records or none if any deletion fails.\n- allOrNone = false: Creating multiple records where some violate unique constraints results in successful creation of valid records and individual errors reported for the records that fail."}},"description":"Headers to be included in the SOAP request sent to Salesforce, primarily used for authentication, session management, and transmitting additional metadata required by the 
Salesforce API. These headers form part of the SOAP envelope's header section and are essential for establishing and maintaining a valid session, specifying client options, and enabling various Salesforce API features. They can include standard headers such as SessionHeader for session identification, CallOptions for client-specific settings, and LoginScopeHeader for scoping login requests, as well as custom headers tailored to specific integration needs. Proper configuration of these headers ensures secure and successful communication with Salesforce services, enabling precise control over API interactions and session lifecycle.\n\n**Field behavior**\n- Defines the set of SOAP headers included in every request to the Salesforce API.\n- Facilitates passing of authentication credentials like session IDs or OAuth tokens.\n- Supports inclusion of metadata that controls API behavior or scopes requests.\n- Allows addition of custom headers required by specialized Salesforce integrations.\n- Headers persist across multiple API calls within the same session when set.\n- Modifies request context by influencing how Salesforce processes and authorizes API calls.\n- Enables fine-grained control over API call execution, such as setting query options or client identifiers.\n\n**Implementation guidance**\n- Construct headers as key-value pairs conforming to Salesforce’s SOAP header XML schema.\n- Include mandatory authentication headers such as SessionHeader with valid session IDs.\n- Validate all header values to prevent injection attacks or malformed requests.\n- Use headers to specify client information, API versioning, or organizational context as needed.\n- Ensure header names and structures strictly follow Salesforce SOAP API documentation.\n- Secure sensitive header data to prevent unauthorized access or leakage.\n- Update headers appropriately when session tokens expire or when switching contexts.\n- Test header configurations thoroughly to confirm compatibility with Salesforce API versions.\n- Consider using namespaces and proper XML serialization to maintain SOAP compliance.\n- Handle errors gracefully when headers are missing or invalid to improve debugging.\n\n**Examples**\n- `{ \"SessionHeader\": { \"sessionId\": \"00Dxx0000001gEREAY\" } }` — authenticates the session.\n- `{ \"CallOptions\": { \"client\": \"MyAppClient\" } }` — specifies client application details.\n- `{ \"LoginScopeHeader\": { \"organizationId\": \"00Dxx0000001gEREAY\" } }` — scopes login to a specific organization.\n- `{ \"CustomHeader\": { \"customField\": \"customValue\" } }` — example of a"},"batchSize":{"type":"number","description":"The number of records to be processed in each batch during Salesforce SOAP API operations. This property controls the size of data chunks sent or retrieved in a single API call, optimizing performance and resource utilization by balancing throughput and system resource consumption. Adjusting the batch size directly impacts the number of API calls made, the processing speed, memory usage, and network load, enabling fine-tuning of data operations to suit specific environment constraints and performance goals. 
Proper configuration of batchSize helps prevent API throttling, reduces the risk of timeouts, and ensures efficient handling of large datasets by segmenting data into manageable portions.\n\n**Field behavior**\n- Determines the count of records included in each batch request to the Salesforce SOAP API.\n- Directly influences the total number of API calls required when handling large datasets.\n- Affects processing throughput, latency, and memory usage during data operations.\n- Can be dynamically adjusted to optimize performance based on system capabilities and network conditions.\n- Impacts error handling and retry mechanisms by controlling batch granularity.\n- Influences network load and the likelihood of encountering API limits or timeouts.\n\n**Implementation guidance**\n- Choose a batch size that adheres to Salesforce API limits and recommended best practices.\n- Balance between larger batch sizes (which reduce API calls but increase memory and processing load) and smaller batch sizes (which increase API calls but reduce resource consumption).\n- Ensure the batchSize value is a positive integer within the allowed range for the specific Salesforce operation.\n- Continuously monitor API response times, error rates, and resource utilization to fine-tune the batch size for optimal performance.\n- Default to Salesforce-recommended batch sizes when uncertain or when starting configuration.\n- Consider network stability and latency when selecting batch size to minimize timeouts and failures.\n- Validate batch size compatibility with other integration components and middleware.\n- Test batch size settings in a staging environment before deploying to production to avoid unexpected failures.\n\n**Examples**\n- `batchSize: 200` — Processes 200 records per batch, a common default for many Salesforce operations.\n- `batchSize: 500` — Processes 500 records per batch, often the maximum allowed for certain Salesforce SOAP API calls.\n- `batchSize: 50` — Processes smaller batches suitable for environments with limited memory or network bandwidth.\n- `batchSize: 100` — A moderate batch size balancing throughput and resource usage in typical scenarios.\n\n**Important notes**\n- Salesforce SOAP API enforces maximum batch size limits (commonly 200 records per create, update, upsert, or delete call)."}},"description":"Specifies the comprehensive configuration settings required for integrating with the Salesforce SOAP API. This property includes all essential parameters to establish, authenticate, and manage SOAP-based communication with Salesforce services, enabling a wide range of operations such as creating, updating, deleting, querying, and executing actions on Salesforce objects. It encompasses detailed configurations for endpoint URLs, authentication credentials (such as session IDs or OAuth tokens), SOAP headers, session management details, timeout settings, and error handling mechanisms to ensure reliable, secure, and efficient interactions. 
The configuration must strictly adhere to Salesforce's WSDL definitions and security standards, supporting robust SOAP API usage within the application environment.\n\n**Field behavior**\n- Defines all connection, authentication, and operational parameters necessary for Salesforce SOAP API interactions.\n- Facilitates the construction and transmission of properly structured SOAP requests and the parsing of SOAP responses.\n- Manages the session lifecycle, including token acquisition, refresh, and timeout handling to maintain persistent connectivity.\n- Supports comprehensive error detection and handling of SOAP faults, API exceptions, and network issues to maintain integration stability.\n- Enables customization of SOAP headers, namespaces, and envelope structures as required by specific Salesforce API calls.\n- Allows configuration of retry policies and backoff strategies in response to transient errors or rate limiting.\n- Supports environment-specific configurations, such as production, non-production, or custom Salesforce domains.\n\n**Implementation guidance**\n- Configure endpoint URLs and authentication credentials accurately based on the Salesforce environment (production, non-production, or custom domains).\n- Validate SOAP envelope structure, namespaces, and compliance with the latest Salesforce WSDL specifications to prevent communication errors.\n- Implement robust error handling to capture, log, and respond appropriately to SOAP faults, API limit exceptions, and network failures.\n- Securely store and transmit sensitive information such as session IDs, OAuth tokens, passwords, and certificates, following best security practices.\n- Monitor Salesforce API usage limits and implement throttling or queuing mechanisms to avoid exceeding quotas and service disruptions.\n- Set timeout values thoughtfully to balance between preventing hanging requests and allowing sufficient time for valid operations to complete.\n- Regularly update the SOAP configuration to align with Salesforce API version changes, WSDL updates, and evolving security requirements.\n- Incorporate logging and monitoring to track SOAP request and response cycles for troubleshooting and performance optimization.\n- Ensure compatibility with Salesforce security protocols, including TLS versions and encryption standards.\n\n**Examples**\n- Specifying the Salesforce SOAP API endpoint URL along with a valid session ID or OAuth token for authentication.\n- Defining custom SOAP headers"},"sObjectType":{"type":"string","description":"sObjectType specifies the exact Salesforce object type that the API operation will target, such as standard objects like Account, Contact, Opportunity, or custom objects like CustomObject__c. This property is essential for defining the schema context of the API request, determining which fields are relevant, and directing the operation to the correct Salesforce data entity. It plays a critical role in data validation, processing logic, and endpoint routing within the Salesforce environment, ensuring that operations such as create, update, delete, or query are executed against the intended object type with precision. Accurate specification of sObjectType enables dynamic adjustment of request payloads and response handling based on the object’s schema and enforces compliance with Salesforce’s API naming conventions and permissions model.  \n**Field behavior:**  \n- Identifies the specific Salesforce object (sObject) type targeted by the API operation.  
\n- Determines the applicable schema, field availability, and validation rules based on the selected object.  \n- Must exactly match a valid Salesforce object API name recognized by the connected Salesforce instance, including case sensitivity.  \n- Routes the API request to the appropriate Salesforce object endpoint for the intended operation.  \n- Influences dynamic field mapping, data transformation, and response parsing processes.  \n- Supports both standard and custom Salesforce objects, with custom objects requiring the “__c” suffix.  \n**Implementation guidance:**  \n- Validate sObjectType values against the current Salesforce object metadata in the target environment to prevent invalid requests.  \n- Enforce exact case-sensitive matching with Salesforce API object names (e.g., \"Account\" not \"account\").  \n- Support and recognize both standard objects and custom objects, ensuring custom objects include the “__c” suffix.  \n- Implement robust error handling for invalid, unsupported, or deprecated sObjectType values to provide clear feedback.  \n- Regularly update the list of allowed sObjectType values to reflect changes in the Salesforce schema or environment.  \n- Use sObjectType to dynamically tailor API request payloads, field selections, and response parsing logic.  \n- Consider object-level permissions and authentication scopes when processing requests involving specific sObjectTypes.  \n**Examples:**  \n- \"Account\"  \n- \"Contact\"  \n- \"Opportunity\"  \n- \"Lead\"  \n- \"CustomObject__c\"  \n- \"Case\"  \n- \"Campaign\"  \n**Important notes:**  \n- Custom Salesforce objects must be referenced using their exact API names, including the “__c” suffix.  \n-"},"idLookup":{"type":"object","properties":{"extract":{"type":"string","description":"Please specify which field from your export data should map to External ID. It is very important to pick a field that is both unique, and guaranteed to always have a value when exported (otherwise the Salesforce Upsert operation will fail). If sample data is available then a select list should be displayed here to help you find the right field from your export data. 
If sample data is not available then you will need to manually type the field id that should map to External ID.\n\n**Field behavior**\n- Specifies which field from the incoming data contains the External ID value\n- **Used for upsert operations ONLY**\n- Maps the incoming data field to the Salesforce External ID field specified in `upsert.externalIdField`\n- Must reference a field that exists in the incoming data payload\n- The field value must be unique and always populated to ensure reliable matching\n- Generates field mapping at runtime to match incoming records with Salesforce records\n\n**When to use**\n- **upsert operations ONLY** — REQUIRED when operation is \"upsert\"\n- Works together with `upsert.externalIdField` to enable External ID matching\n\n**When not to use**\n- **update** — use `whereClause` instead\n- **delete** — use `whereClause` instead\n- **addupdate** — use `whereClause` instead\n- **insert with ignoreExisting** — use `whereClause` instead\n\n**Implementation guidance**\n- Choose a field from incoming data that contains unique, non-null values\n- Ensure the field is always populated in the incoming records to prevent operation failures\n- This field's value will be matched against the Salesforce External ID field specified in `upsert.externalIdField`\n- Use the exact field name as it appears in the incoming data (case-sensitive)\n- If sample data is available, validate that the field exists and has appropriate values\n- Test with sample data to ensure matching works correctly before production deployment\n\n**Example**\n\n**Upsert by External id**\n```json\n{\n  \"operation\": \"upsert\",\n  \"idLookup\": {\n    \"extract\": \"CustomerNumber\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\nMaps incoming field \"CustomerNumber\" → Salesforce field \"External_ID__c\"\n\n**How it works:**\n- Incoming record has `{\"CustomerNumber\": \"CUST-12345\", \"Name\": \"John\"}`\n- System extracts \"CUST-12345\" from CustomerNumber\n- Searches Salesforce External_ID__c field for \"CUST-12345\"\n- If found: updates the existing record\n- If not found: creates a new record with External_ID__c = \"CUST-12345\"\n\n**Common field names**\n- \"ExternalId\" — incoming field containing external identifier\n- \"CustomerId\" — incoming field with unique customer number\n- \"AccountNumber\" — incoming field with account number\n- \"SourceSystemId\" — incoming field from source system\n\n**Important notes**\n- **ONLY use for upsert operations**\n- For all other operations (update, delete, addupdate, insert with ignoreExisting), use `whereClause` instead\n- Field must exist in incoming data and be consistently populated\n- Works with `upsert.externalIdField` to enable External ID matching\n- Missing or null values will cause the upsert operation to fail\n- Field name is case-sensitive and must match exactly\n- If both `extract` and `whereClause` are set, `extract` takes precedence for upsert"},"whereClause":{"type":"string","description":"To determine if a record exists in Salesforce, define a SOQL query WHERE clause for the sObject type selected above. For example, if you are importing contact records with a unique email, your WHERE clause should identify contacts with the same email.\n\nHandlebars statements are allowed, and more complex WHERE clauses can contain AND and OR logic. 
For example, when the product name alone is not unique, you can define a WHERE clause to find any records where the product name exists AND the product also belongs to a specific product family, resulting in a unique product record that matches both criteria.\n\nTo find string values that contain spaces, wrap them in single quotes, for example:\n\nName = 'John Smith'\n\n**Field behavior**\n- Specifies a SOQL conditional expression to match existing records in Salesforce\n- Omit the \"WHERE\" keyword - only provide the condition expression\n- Supports Handlebars {{fieldName}} syntax to reference incoming data fields\n- Supports SOQL operators: =, !=, >, <, >=, <=, LIKE, IN, NOT IN\n- Supports logical operators: AND, OR, NOT\n- Can use parentheses for grouping complex conditions\n- The clause is evaluated for each incoming record to find matching Salesforce records\n\n**When to use (REQUIRED FOR)**\n- **update** — REQUIRED to identify which records to update\n- **delete** — REQUIRED to identify which records to delete\n- **addupdate** — REQUIRED to identify existing records (update if found, insert if not)\n- **insert with ignoreExisting: true** — REQUIRED to check if records exist (skip if found)\n\n**When not to use**\n- **upsert** — DO NOT use; use `extract` instead\n- **insert without ignoreExisting** — Not needed for simple inserts\n\n**Implementation guidance**\n- Write the clause following SOQL syntax rules, omitting the \"WHERE\" keyword\n- Use {{fieldName}} syntax to reference values from incoming records\n- Wrap string values containing spaces in single quotes: `Name = '{{FullName}}'`\n- Numeric and boolean values don't need quotes: `Age = {{Age}}`\n- Use indexed fields (Email, External ID) for better performance\n- Keep the clause selective to minimize records returned\n- Validate syntax before deployment to prevent runtime errors\n\n**Examples**\n\n**Update by Email**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\nFinds records with matching Email for update\n\n**Insert with ignore existing by Email**\n```json\n{\n  \"operation\": \"insert\",\n  \"ignoreExisting\": true,\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\nChecks if a record with matching Email exists; if found, skips creation\n\n**Addupdate (create or update) by Email**\n```json\n{\n  \"operation\": \"addupdate\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\nUpdates if Email matches, creates new record if not found\n\n**Delete by Account Number**\n```json\n{\n  \"operation\": \"delete\",\n  \"idLookup\": {\n    \"whereClause\": \"AccountNumber = '{{AcctNum}}'\"\n  }\n}\n```\nFinds records with matching AccountNumber for deletion\n\n**Complex and logic**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{ProductName}}' AND ProductFamily__c = '{{Family}}'\"\n  }\n}\n```\nMatches records where both Name AND ProductFamily match\n\n**Complex or logic**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}' OR Phone = '{{Phone}}'\"\n  }\n}\n```\nMatches records where Email OR Phone matches\n\n**String with spaces**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{FullName}}'\"\n  }\n}\n```\nNote: Single quotes around {{FullName}} handle names with spaces like \"John Smith\"\n\n**Important notes**\n- **REQUIRED** for update, delete, addupdate, and 
insert (with ignoreExisting)\n- DO NOT use for upsert operations - use `extract` instead\n- Use {{fieldName}} to reference incoming data fields\n- String values with spaces must be wrapped in single quotes\n- Numeric and boolean values don't need quotes\n- Poor clause design can impact performance and hit governor limits\n- Always test in a non-production environment before production deployment"}},"description":"Configuration for looking up existing Salesforce records. **This object should ONLY contain either `extract` OR `whereClause` - NOT both, and NOT operation or other properties.**\n\n**What this object contains**\nThis object should contain ONLY ONE of these properties:\n- `extract` — for upsert operations ONLY\n- `whereClause` — for update, delete, addupdate, and insert (with ignoreExisting)\n\n**DO NOT include operation, upsert, or any other properties here** - those belong at the parent salesforce level.\n\n**Field behavior**\n- Contains two strategies for finding existing records: `extract` and `whereClause`\n- **`extract`**: Used ONLY for upsert operations\n- **`whereClause`**: Used for update, delete, addupdate, and insert (with ignoreExisting)\n- Generates field mapping or SOQL queries at runtime to match records\n- Critical for preventing duplicate record creation and enabling record updates\n\n**When to use each field**\n\n**Use `extract` (upsert ONLY)**\n- **upsert** — REQUIRED: Maps incoming field to External ID field\n\n**Use `whereClause` (all other operations)**\n- **update** — REQUIRED: Identifies records to update via SOQL query\n- **delete** — REQUIRED: Identifies records to delete via SOQL query\n- **addupdate** — REQUIRED: Identifies existing records via SOQL query\n- **insert with ignoreExisting: true** — REQUIRED: Checks if records exist via SOQL query\n\n**Leave empty**\n- **insert without ignoreExisting** — No lookup needed for simple inserts\n\n**What to set (idLookup object content)**\n\n**For upsert**\n```json\n{\n  \"extract\": \"CustomerNumber\"\n}\n```\n\n**For update/delete/addupdate**\n```json\n{\n  \"whereClause\": \"Email = '{{Email}}'\"\n}\n```\n\n**Parent context examples (for reference)**\nThese show the complete parent-level structure. 
**idLookup itself should only contain extract or whereClause.**\n\n**Upsert (parent context)**\n```json\n{\n  \"operation\": \"upsert\",\n  \"idLookup\": {\n    \"extract\": \"CustomerNumber\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\n\n**Update (parent context)**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\n\n**Implementation guidance**\n- Use `extract` ONLY for upsert operations (with `upsert.externalIdField`)\n- Use `whereClause` for update, delete, addupdate, and insert with ignoreExisting\n- Do not set both `extract` and `whereClause` for the same operation\n- **ONLY set extract or whereClause - do not nest operation or other properties**\n- Ensure the lookup strategy matches the operation type\n- Test thoroughly in a non-production environment before production deployment\n\n**Important notes**\n- **This object ONLY contains `extract` or `whereClause`**\n- **`extract`** is for upsert ONLY - maps to External ID field\n- **`whereClause`** is for all other operations requiring lookups\n- Do NOT nest `operation`, `upsert`, or other properties here - those belong at the parent salesforce level\n- For upsert: `extract` + `upsert.externalIdField` work together (both at salesforce level)\n- For other operations: `whereClause` enables SOQL-based record matching\n- Incorrect configuration can cause duplicates or failed operations"},"upsert":{"type":"object","properties":{"externalIdField":{"type":"string","description":"Specifies the API name of the External ID field in Salesforce that will be used to match records during an upsert operation. This is the TARGET field in Salesforce, while `idLookup.extract` specifies the SOURCE field from incoming data.\n\n**Field behavior**\n- Identifies which Salesforce field serves as the external ID for record matching\n- Must be a field marked as \"External ID\" in Salesforce object configuration\n- Works together with `idLookup.extract` to enable upsert matching\n- The field must be indexed and unique in Salesforce\n- Supports text, number, and email field types configured as external IDs\n- Required when `operation: \"upsert\"`\n\n**How it works with idLookup.extract**\n- **`idLookup.extract`**: Specifies which incoming data field contains the matching value (e.g., \"CustomerNumber\")\n- **`externalIdField`**: Specifies which Salesforce field to match against (e.g., \"External_ID__c\")\n- Together they create the match: incoming \"CustomerNumber\" → Salesforce \"External_ID__c\"\n\n**Implementation guidance**\n- Use the exact Salesforce API name of the field (case-sensitive)\n- Verify the field is marked as \"External ID\" in Salesforce object settings\n- Ensure the field is indexed for optimal performance\n- Confirm the integration user has read/write access to this field\n- The field must have unique values to prevent ambiguous matches\n- Test in a non-production environment before production to verify configuration\n\n**Examples**\n\n**Standard External id field**\n```\n\"AccountNumber\"\n```\nStandard Account field marked as External ID\n\n**Custom External id field**\n```\n\"Custom_External_Id__c\"\n```\nCustom field marked as External ID\n\n**Email as External id**\n```\n\"Email\"\n```\nEmail field configured as External ID on Contact\n\n**NetSuite id as External id**\n```\n\"netsuite_conn__NetSuite_Id__c\"\n```\nNetSuite connector External ID field\n\n**How it works (Parent Context)**\nAt the parent `salesforce` level, the complete 
configuration looks like:\n```json\n{\n  \"operation\": \"upsert\",\n  \"idLookup\": {\n    \"extract\": \"CustomerNumber\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\nWhere:\n- `operation: \"upsert\"` — at salesforce level\n- `idLookup.extract: \"CustomerNumber\"` — incoming field, at salesforce level\n- `upsert.externalIdField: \"External_ID__c\"` — **THIS field**, inside upsert object\n\n**Common external id fields**\n- \"AccountNumber\" — standard Account field\n- \"Email\" — when configured as External ID on Contact\n- \"Custom_External_Id__c\" — custom field on any object\n- \"External_ID__c\" — common naming convention\n- \"Source_System_Id__c\" — for multi-system integrations\n\n**Important notes**\n- Field must be explicitly marked as \"External ID\" in Salesforce\n- Not all fields can be External IDs (must be text, number, or email type)\n- Missing or null values in this field will cause upsert to fail\n- If multiple records match the External ID, upsert will fail\n- Always use with `idLookup.extract` for upsert operations"}},"description":"Configuration for Salesforce upsert operations. This object contains ONLY the `externalIdField` property.\n\n**CRITICAL: This component is REQUIRED when `operation: \"upsert\"`. Always include this component for upsert operations.**\n\n**Field behavior**\n- **REQUIRED** when `operation: \"upsert\"`\n- Contains configuration specific to upsert behavior\n- **ONLY contains ONE property: `externalIdField`**\n- Works in conjunction with `idLookup.extract` (which is at the SAME level as this upsert object, NOT nested within it)\n\n**When to use**\n- **REQUIRED** when `operation` is \"upsert\" - MUST be included\n- Leave undefined for insert, update, delete, or addupdate operations\n- Without this object, upsert operations will fail\n\n**Why IT'S required**\n- Upsert operations need to know which Salesforce field to use for matching\n- The `externalIdField` property specifies the Salesforce External ID field\n- This works with `idLookup.extract` to enable record matching\n- Missing this object will cause the import to fail\n\n**Decision**\n**Return true/include this component if `operation: \"upsert\"` is set. This is REQUIRED for all upsert operations.**\n\n**What to set**\n**This object should ONLY contain:**\n```json\n{\n  \"externalIdField\": \"External_ID__c\"\n}\n```\n\n**DO NOT include operation, idLookup, or any other properties here** - those belong at the parent salesforce level.\n\n**Parent context (for reference only)**\nAt the parent `salesforce` level, you will also set:\n- `operation: \"upsert\"` (at salesforce level)\n- `idLookup.extract` (at salesforce level)\n- `upsert.externalIdField` (THIS object)\n\n**Implementation guidance**\n- Always set `externalIdField` when using upsert\n- Ensure the specified External ID field exists in Salesforce\n- Verify the field is marked as \"External ID\" in Salesforce\n- **ONLY set the externalIdField property - nothing else**\n\n**Important notes**\n- **This object ONLY contains `externalIdField`**\n- Do NOT nest `operation`, `idLookup`, or other properties here\n- Those properties belong at the parent salesforce level, not nested in this upsert object\n- May be extended with additional upsert-specific options in the future\n- Only relevant when operation is \"upsert\""},"upsertpicklistvalues":{"type":"object","properties":{"type":{"type":"string","enum":["picklist","multipicklist"],"description":"Specifies the type of picklist field being managed. 
Salesforce supports single-select picklists and multi-select picklists.\n\n**Field behavior**\n- Determines whether the picklist allows single or multiple value selection\n- Affects UI rendering and validation in Salesforce\n- Automatically converted to lowercase before processing\n\n**Picklist types**\n\n****picklist****\n- Standard single-select picklist\n- User can select only one value at a time\n- Most common picklist type\n\n****multipicklist****\n- Multi-select picklist\n- User can select multiple values (separated by semicolons)\n- Use for categories, tags, or multi-value selections\n\n**Examples**\n- \"picklist\" — for Status, Priority, Category fields\n- \"multipicklist\" — for Tags, Skills, Interests fields"},"fullName":{"type":"string","description":"The API name of the picklist field in Salesforce, including the object name.\n\n**Field behavior**\n- Fully qualified field name: ObjectName.FieldName__c\n- Must match exact Salesforce API name\n- Case-sensitive\n- Required for picklist value upsert operations\n\n**Examples**\n- \"Account.MyPicklist__c\"\n- \"Contact.Status__c\"\n- \"CustomObject__c.Category__c\""},"label":{"type":"string","description":"The display label for the picklist field in Salesforce UI.\n\n**Field behavior**\n- Human-readable label shown in Salesforce interface\n- Can contain spaces and special characters\n- Does not need to match API name\n\n**Examples**\n- \"My Picklist\"\n- \"Product Category\"\n- \"Customer Status\""},"visibleLines":{"type":"number","description":"For multi-select picklists, specifies how many lines are visible in the UI without scrolling.\n\n**Field behavior**\n- Only applies to multipicklist fields\n- Controls height of the picklist selection box\n- Default is typically 4-5 lines\n\n**Examples**\n- 3 — shows 3 values at once\n- 5 — shows 5 values at once\n- 10 — shows more values for large lists"}},"description":"Configuration for managing Salesforce picklist field metadata when using the `upsertpicklistvalues` operation. This enables dynamic creation or updates to picklist field definitions.\n\n**Field behavior**\n- Used ONLY when `operation: \"upsertpicklistvalues\"`\n- Manages picklist field metadata, not picklist values themselves\n- Enables programmatic picklist field management\n- Requires appropriate Salesforce metadata permissions\n\n**When to use**\n- When operation is set to \"upsertpicklistvalues\"\n- For managing picklist field definitions programmatically\n- When syncing picklist configurations between orgs\n- For automated picklist field deployment\n\n**Important notes**\n- This manages the FIELD itself, not the individual picklist VALUES\n- Requires Salesforce API and Metadata API permissions\n- Should only be set when operation is \"upsertpicklistvalues\"\n- Most imports won't use this - it's for metadata management"},"removeNonSubmittableFields":{"type":"boolean","description":"Indicates whether fields that are not eligible for submission to Salesforce—such as read-only, system-generated, formula, or otherwise restricted fields—should be automatically excluded from the data payload before it is sent. This feature helps prevent submission errors by ensuring that only valid, submittable fields are included in the API request, thereby improving data integrity and reducing the likelihood of operation failures. 
It operates exclusively on the outgoing submission payload without modifying the original source data, making it a safe and effective way to sanitize data before interaction with Salesforce APIs.\n\n**Field behavior**\n- When enabled (set to true), the system analyzes the outgoing data payload and removes any fields identified as non-submittable based on Salesforce metadata, field-level security, and API constraints.\n- When disabled (set to false) or omitted, all fields present in the payload are submitted as-is, which may lead to errors if restricted, read-only, or system-managed fields are included.\n- Enhances data integrity by proactively filtering out fields that could cause rejection or failure during Salesforce data operations.\n- Applies only to the outgoing submission payload, ensuring the original source data remains unchanged and intact.\n- Dynamically adapts to different Salesforce objects, API versions, and user permissions by referencing up-to-date metadata and security settings.\n\n**Implementation guidance**\n- Enable this flag in integration scenarios where the data schema may include fields that Salesforce does not accept for the target object or API version.\n- Maintain an up-to-date cache or real-time access to Salesforce metadata to accurately reflect field-level permissions, submittability, and API changes.\n- Implement logging or reporting mechanisms to track which fields are removed, aiding in auditing, troubleshooting, and transparency.\n- Use in conjunction with other data validation, transformation, and permission-checking steps to ensure clean, compliant data submissions.\n- Thoroughly test in development and staging environments to understand the impact on data flows, error handling, and downstream processes.\n- Consider user roles and permissions, as submittability may vary depending on the authenticated user's access rights and profile settings.\n\n**Examples**\n- `removeNonSubmittableFields: true` — Automatically removes read-only or system-managed fields such as CreatedDate, LastModifiedById, or formula fields before submission.\n- `removeNonSubmittableFields: false` — Submits all fields, potentially causing errors if restricted or read-only fields are included.\n- Omitting the property defaults to false, meaning no automatic removal of non-submittable"},"document":{"type":"object","properties":{"id":{"type":"string","description":"Unique identifier for the document within the Salesforce system, serving as the primary key that distinctly identifies each document record. This identifier is immutable once assigned and must not be altered or reused to maintain data integrity. It is essential for referencing the document in API calls, queries, integrations, and any operations involving the document. The ID is a base-62 encoded string that can be either 15 characters long, which is case-sensitive, or 18 characters long, which includes a case-insensitive checksum to enhance reliability in external systems. Proper handling of this ID ensures accurate linkage and manipulation of document records across various Salesforce processes and integrations.  \n**Field behavior:**  \n- Acts as the primary key uniquely identifying a document record within Salesforce.  \n- Immutable after creation; should never be changed or reassigned.  \n- Used consistently to reference the document across API calls, queries, and integrations.  \n- Case-sensitive and may vary in length (15 or 18 characters).  
\n- Serves as a critical reference point for establishing relationships with other records.  \n**Implementation guidance:**  \n- Must be generated exclusively by Salesforce or the managing system to guarantee uniqueness.  \n- Should be handled as a string to accommodate Salesforce’s ID formats.  \n- Validate incoming IDs against Salesforce ID format standards to ensure correctness.  \n- Avoid manual generation or modification of IDs to prevent data inconsistencies.  \n- Use the 18-character format when possible to leverage the checksum for case-insensitive environments.  \n**Examples:**  \n- \"0151t00000ABCDE\"  \n- \"0151t00000ABCDEAAA\"  \n**Important notes:**  \n- 15-character IDs are case-sensitive, while 18-character IDs include a case-insensitive checksum.  \n- Do not manually create or alter IDs; always use Salesforce-generated values.  \n- This ID is critical for performing document updates, deletions, retrievals, and establishing relationships.  \n- Using the correct ID format is essential for API compatibility and data integrity.  \n**Dependency chain:**  \n- Required for all operations on salesforce.document entities, including CRUD actions.  \n- Other fields and processes may reference this ID to link related records or trigger actions.  \n- Integral to workflows, triggers, and integrations that depend on document identification.  \n**Technical details:**  \n- Salesforce record IDs are base-62 encoded strings.  \n- IDs can be 15 characters (case-sensitive) or 18 characters (case-insensitive, ending in a checksum suffix)."},"name":{"type":"string","description":"The name of the document as stored in Salesforce, serving as a unique and human-readable identifier within the Salesforce environment. This string value represents the document's title or label, enabling users and systems to easily recognize, retrieve, and manage the document across user interfaces, search functionalities, and API interactions. It is a critical attribute used to reference the document in various operations such as creation, updates, searches, display contexts, and integration workflows, ensuring clear identification and organization within folders or libraries. 
The name plays a pivotal role in document management by facilitating sorting, filtering, and categorization, and it must adhere to Salesforce’s naming conventions and constraints to maintain system integrity and usability.\n\n**Field behavior**\n- Acts as the primary human-readable identifier for the document.\n- Used in UI elements, search queries, and API calls to locate or reference the document.\n- Typically mandatory when creating or updating document records.\n- Should be unique within the scope of the relevant Salesforce object, folder, or library to avoid ambiguity.\n- Influences sorting, filtering, and organization of documents within lists or folders.\n- Changes to the name may trigger updates or validations in dependent systems or references.\n- Supports international characters to accommodate global users.\n\n**Implementation guidance**\n- Choose descriptive yet concise names to enhance clarity, usability, and discoverability.\n- Validate input to exclude unsupported special characters and ensure compliance with Salesforce naming rules.\n- Enforce uniqueness constraints within the relevant context to prevent conflicts and confusion.\n- Consider the impact of renaming documents on existing references, hyperlinks, integrations, and audit trails.\n- Account for potential length restrictions and implement truncation or user prompts as necessary.\n- Support internationalization by allowing UTF-8 encoded characters where appropriate.\n- Implement validation to prevent use of reserved words or prohibited characters.\n\n**Examples**\n- \"Quarterly Sales Report Q1 2024\"\n- \"Employee Handbook\"\n- \"Project Plan - Alpha Release\"\n- \"Marketing Strategy Overview\"\n- \"Client Contract - Acme Corp\"\n\n**Important notes**\n- The maximum length of the name may be limited (commonly up to 255 characters), depending on Salesforce configuration.\n- Renaming a document can affect external references, hyperlinks, or integrations relying on the original name.\n- Case sensitivity in name comparisons may vary based on Salesforce settings and should be considered when implementing logic.\n- Special characters and whitespace handling should align with Salesforce’s accepted standards to avoid errors.\n- Avoid using reserved words or prohibited characters to ensure compatibility and"},"folderId":{"type":"string","description":"The unique identifier of the folder within Salesforce where the document is stored. This ID is essential for organizing, categorizing, and managing documents efficiently within the Salesforce environment. It enables precise association of documents to specific folders, facilitating streamlined retrieval, access control, and hierarchical organization of document assets. By linking a document to a folder, it inherits the folder’s permissions and visibility settings, ensuring consistent access management and supporting structured document workflows. 
Proper use of this identifier allows for effective filtering, querying, and management of documents based on their folder location, which is critical for maintaining an organized and secure document repository.\n\n**Field behavior**\n- Identifies the exact folder containing the document.\n- Associates the document with a designated folder for structured organization.\n- Must reference a valid, existing folder ID within Salesforce.\n- Typically represented as a string conforming to Salesforce ID formats (15- or 18-character).\n- Influences document visibility and access based on folder permissions.\n- Changing the folderId moves the document to a different folder, affecting its context and access.\n- Inherits folder-level sharing and security settings automatically.\n\n**Implementation guidance**\n- Verify that the folderId exists and is accessible in the Salesforce instance before assignment.\n- Validate the folderId format to ensure it matches Salesforce’s 15- or 18-character ID standards.\n- Use this field to filter, query, or categorize documents by their folder location in API operations.\n- When creating or updating documents, setting this field assigns or moves the document to the specified folder.\n- Handle permission checks to ensure the user or integration has rights to assign or access the folder.\n- Consider the impact on sharing rules, workflows, and automation triggered by folder changes.\n- Ensure case sensitivity is preserved when handling folder IDs.\n\n**Examples**\n- \"00l1t000003XyzA\" (example of a Salesforce folder ID)\n- \"00l5g000004AbcD\"\n- \"00l9m00000FghIj\"\n\n**Important notes**\n- The folderId must be valid and the folder must be accessible by the user or integration performing the operation.\n- Using an incorrect or non-existent folderId will cause errors or result in improper document categorization.\n- Folder-level permissions in Salesforce can restrict the ability to assign documents or access folder contents.\n- Changes to folderId may affect document sharing, visibility settings, and trigger related workflows.\n- Folder IDs are case-sensitive and must be handled accordingly.\n- Moving documents between folders"},"contentType":{"type":"string","description":"The MIME type of the document, specifying the exact format of the file content stored within the Salesforce document. This field enables systems and applications to accurately identify, process, render, or handle the document by indicating its media type, such as PDF, image, audio, or text formats. It adheres to standard MIME type conventions (type/subtype) and may include additional parameters like character encoding when necessary. Properly setting this field ensures seamless integration, correct display, appropriate application usage, and reliable content negotiation across diverse platforms and services. 
It plays a critical role in workflows, security scanning, compliance validation, and API interactions by guiding how the document is stored, transmitted, and interpreted.\n\n**Field behavior**\n- Defines the media type of the document content to guide processing, rendering, and handling.\n- Identifies the specific file format (e.g., PDF, JPEG, DOCX) for compatibility and validation purposes.\n- Facilitates content negotiation and appropriate response handling between systems and applications.\n- Typically formatted as a standard MIME type string (type/subtype), optionally including parameters such as charset.\n- Influences document handling in workflows, security scans, user interfaces, and API interactions.\n- Serves as a key attribute for determining how the document is stored, transmitted, and displayed.\n\n**Implementation guidance**\n- Always use valid MIME types conforming to official standards (e.g., \"application/pdf\", \"image/png\") to ensure broad compatibility.\n- Verify that the contentType accurately reflects the actual file content to prevent processing errors or misinterpretation.\n- Update the contentType promptly if the document format changes to maintain consistency and reliability.\n- Utilize this field to enable correct content handling in APIs, integrations, client applications, and automated workflows.\n- Include relevant parameters (such as charset for text-based documents) when applicable to provide additional context.\n- Consider validating the contentType against the file extension and content during upload or processing to enhance data integrity.\n\n**Examples**\n- \"application/pdf\" for PDF documents.\n- \"image/jpeg\" for JPEG image files.\n- \"application/vnd.openxmlformats-officedocument.wordprocessingml.document\" for Microsoft Word DOCX files.\n- \"text/plain; charset=utf-8\" for plain text files with UTF-8 encoding.\n- \"application/json\" for JSON formatted documents.\n- \"audio/mpeg\" for MP3 audio files.\n\n**Important notes**\n- Incorrect or mismatched contentType values can cause improper document rendering, processing failures,"},"developerName":{"type":"string","description":"developerName is a unique, developer-friendly identifier assigned to the document within the Salesforce environment. It acts as a stable and consistent reference used primarily in programmatic contexts such as Apex code, API calls, metadata configurations, deployment scripts, and integration workflows. This identifier ensures reliable access, manipulation, and management of the document across various Salesforce components and external systems. The value should be concise, descriptive, and strictly adhere to Salesforce naming conventions to prevent conflicts and ensure maintainability. 
Typically, it employs camelCase or underscores, excludes spaces and special characters, and is designed to remain unchanged over time to avoid breaking dependencies or integrations.\n\n**Field behavior**\n- Serves as a unique and immutable identifier for the document within the Salesforce org.\n- Widely used in code, APIs, metadata, deployment processes, and integrations to reference the document reliably.\n- Should remain stable and not be altered frequently to maintain integration and system stability.\n- Enforces naming conventions that disallow spaces and special characters, favoring camelCase or underscores.\n- Case-insensitive in many contexts but consistent casing is recommended for clarity and maintainability.\n- Acts as a key reference point in metadata relationships and dependency mappings.\n\n**Implementation guidance**\n- Ensure the developerName is unique within the Salesforce environment to prevent naming collisions.\n- Follow Salesforce naming rules: start with a letter, use only alphanumeric characters and underscores.\n- Avoid reserved keywords, spaces, special characters, and punctuation marks.\n- Keep the identifier concise yet descriptive enough to clearly convey the document’s purpose or function.\n- Validate the developerName against Salesforce’s character set, length restrictions, and naming policies before deployment.\n- Plan for the developerName to remain stable post-deployment to avoid breaking references and integrations.\n- Use consistent casing conventions (camelCase or underscores) to improve readability and reduce errors.\n- Incorporate meaningful terms that reflect the document’s role or content for easier identification.\n\n**Examples**\n- \"InvoiceTemplate\"\n- \"CustomerAgreementDoc\"\n- \"SalesReport2024\"\n- \"ProductCatalog_v2\"\n- \"EmployeeOnboardingGuide\"\n\n**Important notes**\n- Changing the developerName after deployment can disrupt integrations, metadata references, and dependent components.\n- It is distinct from the document’s display name, which can include spaces and be more user-friendly.\n- While Salesforce treats developerName as case-insensitive in many scenarios, maintaining consistent casing improves readability and maintenance.\n- Commonly referenced in metadata APIs, deployment tools, automation scripts, and"},"isInternalUseOnly":{"type":"boolean","description":"isInternalUseOnly indicates whether the document is intended exclusively for internal use within the organization and must not be shared with external parties. This flag is critical for protecting sensitive, proprietary, or confidential information by restricting access strictly to authorized internal users. It serves as a definitive marker within the document’s metadata to enforce internal visibility policies, prevent accidental or unauthorized external distribution, and support compliance with organizational security standards. 
Proper handling of this flag ensures that internal communications, strategic plans, financial data, and other sensitive materials remain protected from exposure outside the company.\n\n**Field behavior**\n- When set to true, the document is strictly restricted to internal users and must not be accessible or shared with any external entities.\n- When false or omitted, the document may be accessible to external users, subject to other access control mechanisms.\n- Commonly used to flag documents containing sensitive business strategies, internal communications, proprietary data, or confidential policies.\n- Influences user interface elements and API responses to hide, restrict, or clearly label document visibility accordingly.\n- Changes to this flag can trigger notifications, workflow actions, or audit events to ensure proper handling and compliance.\n- May affect document indexing, search results, and export capabilities to prevent unauthorized dissemination.\n\n**Implementation guidance**\n- Always verify this flag before granting document access in both user interfaces and backend APIs to ensure compliance with internal use policies.\n- Use in conjunction with role-based access controls, group memberships, document classification labels, and data sensitivity tags to enforce comprehensive security.\n- Carefully manage updates to this flag to prevent inadvertent exposure of confidential information; consider implementing approval workflows and multi-level reviews for changes.\n- Implement audit logging for all changes to this flag to support compliance, traceability, and forensic investigations.\n- Ensure all integrated systems, third-party applications, and data export mechanisms respect this flag to maintain consistent access restrictions.\n- Provide clear UI indicators, warnings, or access restrictions to users when accessing documents marked as internal use only.\n- Regularly review and validate the accuracy of this flag to maintain up-to-date security postures.\n\n**Examples**\n- true: A confidential internal report on upcoming product launches accessible only to employees.\n- false: A publicly available user manual or product datasheet intended for customers and partners.\n- true: Internal HR policies and procedures documents restricted to company staff.\n- false: Press releases or marketing materials distributed externally.\n- true: Financial forecasts and budget plans shared exclusively within the finance department.\n- true: Internal audit findings and compliance reports"},"isPublic":{"type":"boolean","description":"Indicates whether the document is publicly accessible to all users or restricted exclusively to authorized users within the system. This property governs the document's visibility and access permissions, determining if authentication is required to view its content. When set to true, the document becomes openly available without any access controls, allowing anyone—including unauthenticated users—to access it freely. Conversely, setting this flag to false enforces security measures that limit access based on user roles, permissions, and organizational sharing rules, ensuring that only authorized personnel can view the document. 
This setting directly impacts how the document appears in search results, sharing interfaces, and external indexing services, making it a critical factor in managing data privacy, compliance, and information governance.\n\n**Field behavior**\n- When true, the document is accessible by anyone, including unauthenticated users.\n- When false, access is restricted to users with explicit permissions or roles.\n- Directly influences the document’s visibility in search results, sharing interfaces, and external indexing.\n- Changes to this property take immediate effect, altering who can view the document.\n- Modifications may trigger updates in access control enforcement mechanisms.\n\n**Implementation guidance**\n- Use this property to clearly define and enforce document sharing and visibility policies.\n- Always set to false for sensitive, confidential, or proprietary documents to prevent unauthorized access.\n- Implement strict validation to ensure only users with appropriate administrative rights can modify this property.\n- Consider organizational compliance requirements, privacy regulations, and data security standards before making documents public.\n- Regularly monitor and audit changes to this property to maintain security oversight and compliance.\n- Plan changes carefully, as toggling this setting can have immediate and broad impact on document accessibility.\n\n**Examples**\n- true: A publicly available product catalog, marketing brochure, or press release intended for external audiences.\n- false: Internal project plans, financial reports, HR documents, or any content restricted to company personnel.\n\n**Important notes**\n- Making a document public may expose it to indexing by search engines if accessible via public URLs.\n- Changes to this property have immediate effect on document accessibility; ensure appropriate change management.\n- Ensure alignment with corporate governance, legal requirements, and data protection policies when toggling public access.\n- Public documents should be reviewed periodically to confirm that their visibility remains appropriate.\n- Unauthorized changes to this property can lead to data leaks or compliance violations.\n\n**Dependency chain**\n- Access control depends on user roles, profiles, and permission sets configured within the system.\n- Interacts"}},"description":"The 'document' property represents a digital file or record associated with a Salesforce entity, encompassing a wide range of file types such as attachments, reports, images, contracts, spreadsheets, and other files stored within the Salesforce platform. It acts as a comprehensive container that includes both the file's binary content—typically encoded in base64—and its associated metadata, such as file name, description, MIME type, and tags. This property enables seamless integration, retrieval, upload, update, and management of files within Salesforce-related workflows and processes. By linking relevant documents directly to Salesforce records, it enhances data organization, accessibility, collaboration, and compliance across various business functions. 
Additionally, it supports versioning, audit trails, and can trigger automation processes like workflows, validation rules, and triggers, ensuring robust document lifecycle management within the Salesforce ecosystem.\n\n**Field behavior**\n- Contains both the binary content (usually base64-encoded) and descriptive metadata of a file linked to a Salesforce record.\n- Supports a diverse array of file types including PDFs, images, reports, contracts, spreadsheets, and other common document formats.\n- Facilitates operations such as uploading new files, retrieving existing files, updating metadata, deleting documents, and managing versions within Salesforce.\n- May be required or optional depending on the specific API operation, object context, or organizational business rules.\n- Changes to the document content or metadata can trigger Salesforce workflows, triggers, validation rules, or automation processes.\n- Maintains versioning and audit trails when integrated with Salesforce Content or Files features.\n- Respects Salesforce security and sharing settings to control access and protect sensitive information.\n\n**Implementation guidance**\n- Encode the document content in base64 format when transmitting via API to ensure data integrity and compatibility.\n- Validate file size and type against Salesforce platform limits and organizational policies before upload to prevent errors.\n- Provide comprehensive metadata including file name, description, content type (MIME type), file extension, and relevant tags or categories to facilitate search, classification, and governance.\n- Manage user permissions and access controls in accordance with Salesforce security and sharing settings to safeguard sensitive data.\n- Use appropriate Salesforce API endpoints (e.g., /sobjects/Document, Attachment, ContentVersion) and HTTP methods aligned with the document type and intended operation.\n- Implement robust error handling to manage responses related to file size limits, unsupported formats, permission denials, or network issues.\n- Leverage Salesforce Content or Files features for enhanced document management capabilities such as version control, sharing,"},"attachment":{"type":"object","properties":{"id":{"type":"string","description":"Unique identifier for the attachment within the Salesforce system, serving as the immutable primary key for the attachment object. This ID is essential for accurately referencing, retrieving, updating, or deleting the specific attachment record through API calls. It must conform to Salesforce's standardized ID format, which can be either a 15-character case-sensitive or an 18-character case-insensitive alphanumeric string, with the 18-character version preferred for integrations due to its case insensitivity and reduced risk of errors. 
While the ID itself does not contain metadata about the attachment's content, it is crucial for linking the attachment to related Salesforce records and ensuring precise operations on the attachment resource across the platform.\n\n**Field behavior**\n- Acts as the unique and immutable primary key for the attachment record.\n- Required for all API operations involving the attachment, including retrieval, updates, and deletions.\n- Ensures consistent and precise identification of the attachment in all Salesforce API interactions.\n- Remains constant and unchangeable throughout the lifecycle of the attachment once created.\n\n**Implementation guidance**\n- Must strictly adhere to Salesforce ID formats: either 15-character case-sensitive or 18-character case-insensitive alphanumeric strings.\n- Should be dynamically obtained from Salesforce API responses when creating or querying attachments to ensure accuracy.\n- Avoid hardcoding or manually entering IDs to prevent data integrity issues and API errors.\n- Validate the ID format programmatically before use to ensure compatibility with Salesforce API requirements.\n- Prefer using the 18-character ID in integration scenarios to minimize case sensitivity issues.\n\n**Examples**\n- \"00P1t00000XyzAbCDE\"\n- \"00P1t00000XyzAbCDEAAA\"\n- \"00P3t00000LmNOpQR1\"\n\n**Important notes**\n- The 15-character ID is case-sensitive, whereas the 18-character ID is case-insensitive and recommended for integration scenarios.\n- Using an incorrect, malformed, or outdated ID will result in API errors or failed operations.\n- The ID does not convey any descriptive information about the attachment’s content or metadata.\n- Always retrieve and use the ID directly from Salesforce API responses to ensure validity and prevent errors.\n- The ID is unique within the Salesforce org and cannot be reused or duplicated.\n\n**Dependency chain**\n- Generated automatically by Salesforce upon creation of the attachment record.\n- Utilized by other API calls and properties that require precise attachment identification.\n- Linked to parent or related Salesforce objects, integrating the attachment"},"name":{"type":"string","description":"The name property represents the filename of the attachment within Salesforce and serves as the primary identifier for the attachment. It is crucial for enabling easy recognition, retrieval, and management of attachments both in the user interface and through API operations. This property typically includes the file extension, which specifies the file type and ensures proper handling, display, and processing of the file across different systems and platforms. The filename should be meaningful, descriptive, and concise to provide clarity and ease of use, especially when managing multiple attachments linked to a single parent record. 
Adhering to proper naming conventions helps maintain organization, improves searchability, and prevents conflicts or ambiguities that could arise from duplicate or unclear filenames.\n\n**Field behavior**\n- Stores the filename of the attachment as a string value.\n- Acts as the key identifier for the attachment within the Salesforce environment.\n- Displayed prominently in the Salesforce UI and API responses when viewing or managing attachments.\n- Should be unique within the context of the parent record to avoid confusion.\n- Includes file extensions (e.g., .pdf, .jpg, .docx) to denote the file type and facilitate correct processing.\n- Case sensitivity may apply depending on the context, affecting retrieval and display.\n- Used in URL generation and API endpoints, impacting accessibility and referencing.\n\n**Implementation guidance**\n- Validate filenames to exclude prohibited characters (such as \\ / : * ? \" < > |) and other special characters that could cause errors or security vulnerabilities.\n- Ensure the filename length does not exceed Salesforce’s maximum limit, typically 255 characters.\n- Always include the appropriate and accurate file extension to enable correct file type recognition.\n- Use clear, descriptive, and consistent naming conventions that reflect the content or purpose of the attachment.\n- Perform validation checks before upload to prevent failures or inconsistencies.\n- Avoid renaming attachments post-upload to maintain integrity of references and links.\n- Consider localization and encoding standards to support international characters where applicable.\n\n**Examples**\n- \"contract_agreement.pdf\"\n- \"profile_picture.jpg\"\n- \"sales_report_q1_2024.xlsx\"\n- \"meeting_notes.txt\"\n- \"project_plan_v2.docx\"\n\n**Important notes**\n- Filenames are case-sensitive in certain contexts, which can impact retrieval and display.\n- Avoid special characters that may interfere with URLs, file systems, or API processing.\n- Renaming an attachment after upload can disrupt existing references or links to the file.\n- Consistency between the filename and the actual content enhances user"},"parentId":{"type":"string","description":"The unique identifier of the parent Salesforce record to which the attachment is linked. This field establishes a direct and essential relationship between the attachment and its parent object, such as an Account, Contact, Opportunity, or any other Salesforce record type that supports attachments. It ensures that attachments are correctly associated within the Salesforce data model, enabling accurate retrieval, management, and organization of attachments in relation to their parent records. By linking attachments to their parent records, this field facilitates data integrity, efficient querying, and seamless navigation between related records. It is a mandatory field when creating or updating attachments and must reference a valid, existing Salesforce record ID. 
Proper use of this field guarantees referential integrity, enforces access controls based on parent record permissions, and impacts the visibility and lifecycle of the attachment within the Salesforce environment.\n\n**Field behavior**\n- Represents the Salesforce record ID that the attachment is associated with.\n- Mandatory when creating or updating an attachment to specify its parent record.\n- Must correspond to an existing Salesforce record ID of a valid object type that supports attachments.\n- Enables querying, filtering, and reporting of attachments based on their parent record.\n- Enforces referential integrity between attachments and parent records.\n- Changes to the parent record, including deletion or modification, can affect the visibility, accessibility, and lifecycle of the attachment.\n- Access to the attachment is governed by the permissions and sharing settings of the parent record.\n\n**Implementation guidance**\n- Ensure the parentId is a valid 15- or 18-character Salesforce record ID, with preference for 18-character IDs to avoid case sensitivity issues.\n- Verify the existence and active status of the parent record before assigning the parentId.\n- Confirm that the parent object type supports attachments to prevent errors during attachment operations.\n- Handle user permissions and sharing settings to ensure appropriate access rights to the parent record and its attachments.\n- Use Salesforce APIs or metadata services to validate the parentId format, object type, and existence.\n- Implement robust error handling for invalid, non-existent, or unauthorized parentId values during attachment creation or updates.\n- Consider the impact of parent record lifecycle events (such as deletion or archiving) on associated attachments.\n\n**Examples**\n- \"0011a000003XYZ123\" (Account record ID)\n- \"0031a000004ABC456\" (Contact record ID)\n- \"0061a000005DEF789\" (Opportunity record ID)\n\n**Important notes**\n- The parentId is critical for maintaining data integrity and ensuring"},"contentType":{"type":"string","description":"The MIME type of the attachment content, specifying the exact format and nature of the file (e.g., \"image/png\", \"application/pdf\", \"text/plain\"). This information is critical for correctly interpreting, processing, and displaying the attachment across different systems and platforms. It enables clients and servers to determine how to handle the file, whether to render it inline, prompt for download, or apply specific processing rules. Accurate contentType values facilitate validation, security scanning, and compatibility checks, ensuring that the attachment is managed appropriately throughout its lifecycle. 
Properly defined contentType values improve interoperability, user experience, and system security by enabling precise content handling and reducing the risk of errors or vulnerabilities.\n\n**Field behavior**\n- Defines the media type of the attachment to guide processing, rendering, and handling decisions.\n- Used by clients, servers, and intermediaries to determine appropriate actions such as inline display, download prompts, or specialized processing.\n- Supports validation of file format during upload, storage, and download operations to ensure data integrity.\n- Influences security measures including filtering, scanning for malware, and enforcing content policies.\n- Affects user interface elements like preview generation, icon selection, and content categorization.\n- May be used in logging, auditing, and analytics to track content types handled by the system.\n\n**Implementation guidance**\n- Must strictly adhere to the standard MIME type format as defined by IANA (e.g., \"type/subtype\").\n- Should accurately reflect the actual content format to prevent misinterpretation or processing errors.\n- Use generic types such as \"application/octet-stream\" only when the specific type cannot be determined.\n- Include optional parameters (e.g., charset, boundary) when relevant to fully describe the content.\n- Ensure the contentType is set or updated whenever the attachment content changes to maintain consistency.\n- Validate the contentType against the file extension and actual content where feasible to enhance reliability.\n- Consider normalizing the contentType to lowercase to maintain consistency across systems.\n- Avoid relying solely on contentType for security decisions; combine with other metadata and content inspection.\n\n**Examples**\n- \"image/jpeg\"\n- \"application/pdf\"\n- \"text/plain; charset=utf-8\"\n- \"application/vnd.ms-excel\"\n- \"audio/mpeg\"\n- \"video/mp4\"\n- \"application/json\"\n- \"multipart/form-data; boundary=something\"\n- \"application/zip\"\n\n**Important notes**\n- An incorrect, missing, or misleading contentType can lead to improper"},"isPrivate":{"type":"boolean","description":"Indicates whether the attachment is designated as private, restricting its visibility exclusively to users who have been explicitly authorized to access it. This setting is critical for maintaining confidentiality and ensuring that sensitive or proprietary information contained within attachments is accessible only to appropriate personnel. When marked as private, the attachment’s visibility is constrained based on the user’s permissions and the sharing settings of the parent record, thereby reinforcing data security within the Salesforce environment. 
This field directly controls the scope of access to the attachment, dynamically influencing who can view or interact with it, and must be managed carefully to align with organizational privacy policies and compliance requirements.\n\n**Field behavior**\n- When set to true, the attachment is accessible only to users with explicit authorization, effectively hiding it from unauthorized users.\n- When set to false or omitted, the attachment inherits the visibility of its parent record and is accessible to all users who have access to that record.\n- Directly affects the attachment’s visibility scope and access control within Salesforce.\n- Changes to this field immediately impact user access permissions for the attachment.\n- Does not override broader organizational sharing settings but further restricts access.\n- The privacy setting applies in conjunction with existing record-level and object-level security controls.\n\n**Implementation guidance**\n- Use this field to enforce strict data privacy and protect sensitive attachments in compliance with organizational policies and regulatory requirements.\n- Ensure that user roles, profiles, and sharing rules are properly configured to respect this privacy setting.\n- Validate that the value is a boolean (true or false) to prevent data inconsistencies.\n- Evaluate the implications on existing sharing rules and record-level security before modifying this field.\n- Consider implementing audit logging to track changes to this privacy setting for security and compliance purposes.\n- Test changes in a non-production environment to understand the impact on user access before deploying to production.\n- Communicate changes to relevant stakeholders to avoid confusion or unintended access issues.\n\n**Examples**\n- `true` — The attachment is private and visible only to users explicitly authorized to access it.\n- `false` — The attachment is public within the context of the parent record and visible to all users who have access to that record.\n\n**Important notes**\n- Setting this field to true does not grant access to users who lack permission to view the parent record; it only further restricts access.\n- Modifying this setting can significantly affect user access and should be managed carefully to avoid unintended data exposure or access denial.\n- Some Salesforce editions or configurations may have limitations or specific behaviors regarding attachment privacy"},"description":{"type":"string","description":"A textual description providing additional context, detailed information, and clarifying the content, purpose, or relevance of the attachment within Salesforce. This field enhances user understanding and facilitates easier identification, organization, and management of attachments by serving as descriptive metadata that complements the attachment’s core data without containing the attachment content itself. It supports plain text with special characters, punctuation, and full Unicode, enabling expressive, internationalized, and accessible descriptions. 
Typically displayed in user interfaces alongside attachment details, this description also plays a crucial role in search, filtering, sorting, and reporting operations, improving overall data discoverability and usability.\n\n**Field behavior**\n- Optional and editable field used to describe the attachment’s nature or relevance.\n- Supports plain text input including special characters, punctuation, and Unicode.\n- Displayed prominently in user interfaces where attachment details appear.\n- Utilized in search, filtering, sorting, and reporting functionalities to enhance data retrieval.\n- Does not contain or affect the actual attachment content or file storage.\n\n**Implementation guidance**\n- Encourage concise yet informative descriptions to aid quick identification.\n- Validate and sanitize input to prevent injection attacks and ensure data integrity.\n- Support full Unicode to accommodate international characters and symbols.\n- Enforce a reasonable maximum length (commonly 255 characters) to maintain performance and UI consistency.\n- Avoid inclusion of sensitive or confidential information unless appropriate security controls are applied.\n- Implement trimming or sanitization routines to maintain clean and consistent data.\n\n**Examples**\n- \"Quarterly financial report for Q1 2024.\"\n- \"Signed contract agreement with client XYZ.\"\n- \"Product brochure for the new model launch.\"\n- \"Meeting notes and action items from April 15, 2024.\"\n- \"Technical specification document for project Alpha.\"\n\n**Important notes**\n- This field contains descriptive metadata only, not the attachment content.\n- Sensitive information should be excluded or protected according to security policies.\n- Changes to this field do not impact the attachment file or its storage.\n- Consistent and accurate descriptions improve attachment management and user experience.\n- Plays a key role in enhancing searchability and filtering within Salesforce.\n\n**Dependency chain**\n- Directly associated with the attachment entity in Salesforce.\n- Often used in conjunction with other metadata fields such as attachment name, type, owner, size, and creation date.\n- Supports comprehensive context when combined with related records and metadata.\n\n**Technical details**\n- Data type: string.\n- Maximum length: typically 255 characters, configurable"}},"description":"The attachment property represents a file or document linked to a Salesforce record, encompassing both the binary content of the file and its associated metadata. It serves as a versatile container for a wide variety of file types—including documents, images, PDFs, spreadsheets, and emails—allowing users to enrich Salesforce records such as emails, cases, opportunities, or any standard or custom objects with supplementary information or evidence. This property supports comprehensive lifecycle management, enabling operations such as uploading new files, retrieving existing attachments, updating file details, and deleting attachments as needed. By maintaining a direct association with the parent Salesforce record through identifiers like ParentId, it ensures accurate linkage and seamless integration within the Salesforce ecosystem. 
The attachment property facilitates enhanced collaboration, documentation, and data completeness by providing a centralized and manageable way to handle files related to Salesforce data.\n\n**Field behavior**\n- Stores the binary content or a reference to a file linked to a specific Salesforce record.\n- Supports multiple file formats including but not limited to documents, images, PDFs, spreadsheets, and email files.\n- Enables attaching additional context, documentation, or evidence to Salesforce objects.\n- Allows full lifecycle management: create, read, update, and delete operations on attachments.\n- Contains metadata such as file name, MIME content type, file size, and the ID of the parent Salesforce record.\n- Reflects changes immediately in the associated Salesforce record’s file attachments.\n- Maintains a direct association with the parent record through the ParentId field to ensure accurate linkage.\n- Supports integration with Salesforce Files (ContentVersion and ContentDocument) for advanced file management features.\n\n**Implementation guidance**\n- Encode file content using Base64 encoding when transmitting via Salesforce APIs to ensure data integrity.\n- Validate file size against Salesforce limits (commonly up to 25 MB per attachment) and confirm supported file types before upload.\n- Use the ParentId field to correctly associate the attachment with the intended Salesforce record.\n- Implement robust permission checks to ensure only authorized users can upload, view, or modify attachments.\n- Consider leveraging Salesforce Files (ContentVersion and ContentDocument objects) for enhanced file management capabilities, versioning, and sharing features, especially for new implementations.\n- Handle error responses gracefully, including those related to file size limits, unsupported formats, or permission issues.\n- Ensure metadata fields such as file name and content type are accurately populated to facilitate proper handling and display.\n- When updating attachments, confirm that the new content or metadata aligns with organizational policies and compliance requirements.\n- Optimize performance"},"contentVersion":{"type":"object","properties":{"contentDocumentId":{"type":"string","description":"contentDocumentId is the unique identifier for the parent Content Document associated with a specific Content Version in Salesforce. It serves as the fundamental reference that links all versions of the same document, enabling comprehensive version control and efficient document management within the Salesforce ecosystem. This ID is immutable once the Content Version is created, ensuring a stable and consistent connection across all iterations of the document. By maintaining this linkage, Salesforce facilitates seamless navigation between individual document versions and their overarching document entity, supporting streamlined retrieval, updates, collaboration, and organizational workflows. 
This field is automatically assigned by Salesforce during the creation of a Content Version and is critical for maintaining the integrity and traceability of document versions throughout their lifecycle.\n\n**Field behavior**\n- Represents the unique ID of the parent Content Document to which the Content Version belongs.\n- Acts as the primary link connecting multiple versions of the same document.\n- Immutable after the Content Version record is created, preserving data integrity.\n- Enables navigation from a specific Content Version to its parent Content Document.\n- Automatically assigned and managed by Salesforce during Content Version creation.\n- Essential for maintaining consistent version histories and document relationships.\n\n**Implementation guidance**\n- Use this ID to reference or retrieve the entire document across all its versions.\n- Verify that the ID corresponds to an existing Content Document before usage.\n- Utilize in SOQL queries and API calls to associate Content Versions with their parent document.\n- Avoid modifying this field directly; it is managed internally by Salesforce.\n- Employ this ID to support document lifecycle management, version tracking, and collaboration features.\n- Leverage this field to ensure accurate linkage in integrations and custom development.\n\n**Examples**\n- \"0691t00000XXXXXXAAA\"\n- \"0692a00000YYYYYYBBB\"\n- \"0693b00000ZZZZZZCCC\"\n\n**Important notes**\n- Distinct from the Content Version ID, which identifies a specific version rather than the entire document.\n- Mandatory for any valid Content Version record to ensure proper document association.\n- Cannot be null or empty for Content Version records.\n- Critical for preserving document integrity, version tracking, and enabling collaborative document management.\n- Changes to this ID are not permitted post-creation to avoid breaking version linkages.\n- Plays a key role in Salesforce’s document sharing, access control, and audit capabilities.\n\n**Dependency chain**\n- Requires the existence of a valid Content Document record in Salesforce.\n- Directly related to Content Version records representing different iterations of the same document.\n- Integral"},"title":{"type":"string","description":"The title of the content version serves as the primary name or label assigned to this specific iteration of content within Salesforce. It acts as a key identifier that enables users to quickly recognize, differentiate, and manage various versions of content. The title typically summarizes the subject matter, purpose, or significant details about the content, making it easier to locate and understand at a glance. It should be concise yet sufficiently descriptive to effectively convey the essence and context of the content version. 
Well-crafted titles improve navigation, searchability, and version tracking within content management workflows, enhancing overall user experience and operational efficiency.\n\n**Field behavior**\n- Functions as the main identifier or label for the content version in user interfaces, listings, and reports.\n- Enables users to distinguish between different versions of the same content efficiently.\n- Reflects the content’s subject, purpose, key attributes, or version status.\n- Should be concise but informative to facilitate quick recognition and comprehension.\n- Commonly displayed in search results, filters, version histories, and content libraries.\n- Supports sorting and filtering operations to streamline content management.\n\n**Implementation guidance**\n- Use clear, descriptive titles that accurately represent the content version’s focus or changes.\n- Avoid special characters or formatting that could cause display, parsing, or processing issues.\n- Keep titles within a reasonable length (typically up to 255 characters) to ensure readability across various UI components and devices.\n- Update the title appropriately when creating new versions to reflect significant updates or revisions.\n- Consider including version numbers, dates, or status indicators to enhance clarity and version tracking.\n- Follow organizational naming conventions and standards to maintain consistency.\n\n**Examples**\n- \"Q2 Financial Report 2024\"\n- \"Product Launch Presentation v3\"\n- \"Employee Handbook - Updated March 2024\"\n- \"Marketing Strategy Overview\"\n- \"Website Redesign Proposal - Final Draft\"\n\n**Important notes**\n- Titles do not need to be globally unique but should be meaningful and contextually relevant.\n- Changing the title does not modify the underlying content data or its integrity.\n- Titles are frequently used in search, filtering, sorting, and reporting operations within Salesforce.\n- Consistent naming conventions and formatting improve content management efficiency and user experience.\n- Consider organizational standards or guidelines when defining title formats.\n- Titles should avoid ambiguity to prevent confusion between similar content versions.\n\n**Dependency chain**\n- Part of the ContentVersion object within Salesforce’s content management framework.\n- Often associated with other metadata fields such as description"},"pathOnClient":{"type":"string","description":"The pathOnClient property specifies the original file path of the content as it exists on the client machine before being uploaded to Salesforce. This path serves as a precise reference to the source location of the file on the user's local system, facilitating identification, auditing, and tracking of the file's origin. It captures either the full absolute path or a relative path depending on the client environment and upload context, accurately reflecting the exact location from which the file was sourced. 
This information is primarily used for informational, diagnostic, and user interface purposes within Salesforce and does not influence how the file is stored, accessed, or secured in the system.\n\n**Field behavior**\n- Represents the full absolute or relative file path on the client device where the file was stored prior to upload.\n- Automatically populated during file upload processes when the client environment provides this information.\n- Used mainly for reference, auditing, and user interface display to provide context about the file’s origin.\n- May be empty, null, or omitted if the file was not uploaded from a client device or if the path information is unavailable or restricted.\n- Does not influence file storage, retrieval, or access permissions within Salesforce.\n- Retains the original formatting and syntax as provided by the client system at upload time.\n\n**Implementation guidance**\n- Capture the file path exactly as provided by the client system at the time of upload, preserving the original format and case sensitivity.\n- Handle differences in file path syntax across operating systems (e.g., backslashes for Windows, forward slashes for Unix/Linux/macOS).\n- Treat this property strictly as metadata for informational purposes; avoid using it for security, access control, or file retrieval logic.\n- Sanitize or mask sensitive directory or user information when displaying or logging this path to protect user privacy and comply with data protection regulations.\n- Ensure compliance with relevant privacy laws and organizational policies when storing or exposing client file paths.\n- Consider truncating, normalizing, or encoding paths if necessary to conform to platform length limits, display constraints, or to prevent injection vulnerabilities.\n- Provide clear documentation to users and administrators about the non-security nature of this property to prevent misuse.\n\n**Examples**\n- \"C:\\\\Users\\\\JohnDoe\\\\Documents\\\\Report.pdf\"\n- \"/home/janedoe/projects/presentation.pptx\"\n- \"Documents/Invoices/Invoice123.pdf\"\n- \"Desktop/Project Files/DesignMockup.png\"\n- \"../relative/path/to/file.txt\"\n\n**Important notes**\n- This property does not"},"tagCsv":{"type":"string","description":"A comma-separated string representing the tags associated with the content version in Salesforce. This field enables the assignment of multiple descriptive tags within a single string, facilitating efficient categorization, organization, and enhanced searchability of the content. Tags serve as metadata labels that help users filter, locate, and manage content versions effectively across the platform by grouping related items and supporting advanced search and filtering capabilities. 
Proper use of this field improves content discoverability, streamlines content management workflows, and supports dynamic content classification.\n\n**Field behavior**\n- Accepts multiple tags separated by commas, typically without spaces, though trimming whitespace is recommended.\n- Tags are used to categorize, organize, and improve the discoverability of content versions.\n- Tags can be added, updated, or removed by modifying the entire string value.\n- Duplicate tags within the string should be avoided to maintain clarity and effectiveness.\n- Tags are generally treated as case-insensitive for search purposes but stored exactly as entered.\n- Changes to this field immediately affect how content versions are indexed and retrieved in search operations.\n\n**Implementation guidance**\n- Ensure individual tags do not contain commas to prevent parsing errors.\n- Validate the tagCsv string for invalid characters, excessive length, and proper formatting.\n- When updating tags, replace the entire string to accurately reflect the current tag set.\n- Trim leading and trailing whitespace from each tag before processing or storage.\n- Utilize this field to enhance search filters, categorization, and content management workflows.\n- Implement application-level checks to prevent duplicate or meaningless tags.\n- Consider standardizing tag formats (e.g., lowercase) to maintain consistency across records.\n\n**Examples**\n- \"marketing,Q2,presentation\"\n- \"urgent,clientA,confidential\"\n- \"releaseNotes,v1.2,approved\"\n- \"finance,yearEnd,reviewed\"\n\n**Important notes**\n- The maximum length of the tagCsv string is subject to Salesforce field size limitations.\n- Tags should be meaningful, consistent, and standardized to maximize their utility.\n- Modifications to tags can impact content visibility and access if used in filtering or permission logic.\n- This field does not enforce uniqueness of tags; application-level logic should handle duplicate prevention.\n- Tags are stored as-is; any normalization or case conversion must be handled externally.\n- Improper formatting or invalid characters may cause errors or unexpected behavior in search and filtering.\n\n**Dependency chain**\n- Closely related to contentVersion metadata and search/filtering functionality.\n- May integrate with Salesforce"},"contentLocation":{"type":"string","description":"contentLocation specifies the precise storage location of the content file associated with the Salesforce ContentVersion record. It identifies where the actual file data resides—whether within Salesforce's internal storage infrastructure, an external content management system, or a linked external repository—and governs how the content is accessed, retrieved, and managed throughout its lifecycle. This field plays a crucial role in content delivery, access permissions, and integration workflows by clearly indicating the source location of the file data. 
Properly setting and interpreting contentLocation ensures that content is handled correctly according to its storage context, enabling seamless access when stored internally or facilitating robust integration and synchronization with third-party external systems.\n\n**Field behavior**\n- Determines the origin and storage context of the content file linked to the ContentVersion record.\n- Influences the mechanisms and protocols used for accessing, retrieving, and managing the content.\n- Typically assigned automatically by Salesforce based on the upload method or integration configuration.\n- Common values include 'S' for Salesforce internal storage, 'E' for external storage systems, and 'L' for linked external locations.\n- Often read-only to preserve data integrity and prevent unauthorized or unintended modifications.\n- Alterations to this field can impact content visibility, sharing settings, and delivery workflows.\n\n**Implementation guidance**\n- Utilize this field to programmatically distinguish between internally stored content and externally managed files.\n- When integrating with external repositories, ensure contentLocation accurately reflects the external storage to enable correct content handling and retrieval.\n- Avoid manual updates unless supported by Salesforce documentation and accompanied by a comprehensive understanding of the consequences.\n- Validate the contentLocation value prior to processing or delivering content to guarantee proper routing and access control.\n- Confirm that external storage systems are correctly configured, authenticated, and authorized to maintain uninterrupted access.\n- Monitor this field closely during content migration or integration activities to prevent data inconsistencies or access disruptions.\n\n**Examples**\n- 'S' — Content physically stored within Salesforce’s internal storage infrastructure.\n- 'E' — Content residing in an external system integrated with Salesforce, such as a third-party content repository.\n- 'L' — Content located in a linked external location, representing specialized or less common storage configurations.\n\n**Important notes**\n- Manual modifications to contentLocation can lead to broken links, access failures, or data inconsistencies.\n- Salesforce generally manages this field automatically to uphold consistency and reliability.\n- The contentLocation value directly influences content delivery methods and access control policies.\n- External content locations require proper configuration, authentication, and permissions to function correctly"}},"description":"The contentVersion property serves as the unique identifier for a specific iteration of a content item within the Salesforce platform's content management system. It enables precise tracking, retrieval, and management of individual versions or revisions of documents, files, or digital assets. Each contentVersion corresponds to a distinct, immutable state of the content at a given point in time, facilitating robust version control, content integrity, and auditability. When a content item is created or modified, Salesforce automatically generates a new contentVersion to capture those changes, allowing users and systems to access, compare, or revert to previous versions as necessary. 
This property is fundamental for workflows involving content updates, approvals, compliance tracking, and historical content analysis.\n\n**Field behavior**\n- Uniquely identifies a single, immutable version of a content item within Salesforce.\n- Differentiates between multiple revisions or updates of the same content asset.\n- Automatically created or incremented by Salesforce upon content creation or modification.\n- Used in API operations to retrieve, reference, update metadata, or manage specific content versions.\n- Remains constant once assigned; any content changes result in a new contentVersion record.\n- Supports audit trails by preserving historical versions of content.\n\n**Implementation guidance**\n- Ensure synchronization of contentVersion values with Salesforce to maintain consistency across integrated systems.\n- Utilize this property for version-specific operations such as downloading content, updating metadata, or deleting particular versions.\n- Validate contentVersion identifiers against Salesforce API standards to avoid errors or mismatches.\n- Implement comprehensive error handling for scenarios where contentVersion records are missing, deprecated, or inaccessible due to permission restrictions.\n- Consider access control differences at the version level, as visibility and permissions may vary between versions.\n- Leverage contentVersion in workflows requiring version comparison, rollback, or approval processes.\n\n**Examples**\n- \"0681t000000XyzA\" (initial version of a document)\n- \"0681t000000XyzB\" (a subsequent, updated version reflecting content changes)\n- \"0681t000000XyzC\" (a further revised version incorporating additional edits)\n\n**Important notes**\n- contentVersion is distinct from contentDocumentId; contentVersion identifies a specific version, whereas contentDocumentId refers to the overall document entity.\n- Proper versioning is critical for maintaining content integrity, supporting audit trails, and ensuring regulatory compliance.\n- Access permissions and visibility can differ between content versions, potentially affecting user access and operations.\n- The contentVersion ID typically begins with the prefix"}}},"File":{"type":"object","description":"**CRITICAL: This object is REQUIRED for all file-based import adaptor types.**\n\n**When to include this object**\n\n✅ **MUST SET** when `adaptorType` is one of:\n- `S3Import`\n- `FTPImport`\n- `AS2Import`\n\n❌ **DO NOT SET** for non-file-based imports like:\n- `SalesforceImport`\n- `NetSuiteImport`\n- `HTTPImport`\n- `MongodbImport`\n- `RDBMSImport`\n\n**Minimum required fields**\n\nFor most file imports, you need at minimum:\n- `fileName`: The output file name (supports Handlebars like `{{timestamp}}`)\n- `type`: The file format (json, csv, xml, xlsx)\n- `skipAggregation`: Usually `false` for standard imports\n\n**Example (S3 Import)**\n\n```json\n{\n  \"file\": {\n    \"fileName\": \"customers-{{timestamp}}.json\",\n    \"skipAggregation\": false,\n    \"type\": \"json\"\n  },\n  \"s3\": {\n    \"region\": \"us-east-1\",\n    \"bucket\": \"my-bucket\",\n    \"fileKey\": \"customers-{{timestamp}}.json\"\n  },\n  \"adaptorType\": \"S3Import\"\n}\n```","properties":{"fileName":{"type":"string","description":"**REQUIRED for file-based imports.**\n\nThe name of the file to be created/written. 
Supports Handlebars expressions for dynamic naming.\n\n**Default behavior**\n- When the user does not specify a file naming convention, **always default to timestamped filenames** using `{{timestamp}}` (e.g., `\"items-{{timestamp}}.csv\"`). This ensures each run produces a unique file and avoids overwriting previous exports.\n- Only use a fixed filename (without timestamp) if the user explicitly requests overwriting or a fixed name.\n\n**Common patterns**\n- `\"data-{{timestamp}}.json\"` - Timestamped JSON file (DEFAULT — use this pattern when not specified)\n- `\"export-{{date}}.csv\"` - Date-stamped CSV file\n- `\"{{recordType}}-backup.xml\"` - Dynamic record type naming\n\n**Examples**\n- `\"customers-{{timestamp}}.json\"`\n- `\"orders-export.csv\"`\n- `\"inventory-{{date}}.xlsx\"`\n\n**Important**\n- For S3 imports, this should typically match the `s3.fileKey` value\n- For FTP imports, this should typically match the `ftp.fileName` value"},"skipAggregation":{"type":"boolean","description":"Controls whether records are aggregated into a single file or processed individually.\n\n**Values**\n- `false` (DEFAULT): Records are aggregated into a single output file\n- `true`: Each record is written as a separate file\n\n**When to use**\n- **`false`**: Standard batch imports where all records go into one file\n- **`true`**: When each record needs its own file (e.g., individual documents)\n\n**Default:** `false`","default":false},"type":{"type":"string","enum":["json","csv","xml","xlsx","filedefinition"],"description":"**REQUIRED for file-based imports.**\n\nThe format of the output file.\n\n**Options**\n- `\"json\"`: JSON format (most common for API data)\n- `\"csv\"`: Comma-separated values (tabular data)\n- `\"xml\"`: XML format\n- `\"xlsx\"`: Excel spreadsheet format\n- `\"filedefinition\"`: Custom file definition format\n\n**Selection guidance**\n- Use `\"json\"` for structured/nested data, API payloads\n- Use `\"csv\"` for flat tabular data, spreadsheet-like records\n- Use `\"xml\"` for XML-based integrations, SOAP services\n- Use `\"xlsx\"` for Excel-compatible outputs"},"encoding":{"type":"string","enum":["utf8","win1252","utf-16le","gb18030","macroman","iso88591","shiftjis"],"description":"Character encoding for the output file.\n\n**Default:** `\"utf8\"`\n\n**When to change**\n- `\"win1252\"`: Legacy Windows systems\n- `\"utf-16le\"`: Unicode with BOM requirements\n- `\"gb18030\"`: Chinese character sets\n- `\"shiftjis\"`: Japanese character sets","default":"utf8"},"delete":{"type":"boolean","description":"Whether to delete the source file after successful import.\n\n**Values**\n- `true`: Delete source file after processing\n- `false`: Keep source file\n\n**Default:** `false`","default":false},"compressionFormat":{"type":"string","enum":["gzip","zip"],"description":"Compression format for the output file.\n\n**Options**\n- `\"gzip\"`: GZIP compression (.gz)\n- `\"zip\"`: ZIP compression (.zip)\n\n**When to use**\n- Large files that benefit from compression\n- When target system expects compressed files"},"backupPath":{"type":"string","description":"Path where backup copies of files should be stored.\n\n**Examples**\n- `\"backup/\"` - Relative backup folder\n- `\"/archive/2024/\"` - Absolute backup path"},"purgeInternalBackup":{"type":"boolean","description":"Whether to purge internal backup copies after successful processing.\n\n**Default:** `false`","default":false},"batchSize":{"type":"integer","description":"Number of records to include per batch/file when processing large 
datasets.\n\n**When to use**\n- Large imports that need to be split into multiple files\n- When target system has file size limitations\n\n**Note**\n- Only applicable when `skipAggregation` is `true`"},"encrypt":{"type":"boolean","description":"Whether to encrypt the output file.\n\n**Values**\n- `true`: Encrypt the file (requires PGP configuration)\n- `false`: No encryption\n\n**Default:** `false`","default":false},"csv":{"type":"object","description":"CSV-specific configuration. Only used when `type` is `\"csv\"`.","properties":{"rowDelimiter":{"type":"string","description":"Character(s) used to separate rows. Default is newline.","default":"\n"},"columnDelimiter":{"type":"string","description":"Character(s) used to separate columns. Default is comma.","default":","},"includeHeader":{"type":"boolean","description":"Whether to include header row with column names.","default":true},"wrapWithQuotes":{"type":"boolean","description":"Whether to wrap field values in quotes.","default":false},"replaceTabWithSpace":{"type":"boolean","description":"Replace tab characters with spaces.","default":false},"replaceNewlineWithSpace":{"type":"boolean","description":"Replace newline characters with spaces within fields.","default":false},"truncateLastRowDelimiter":{"type":"boolean","description":"Remove trailing row delimiter from the file.","default":false}}},"json":{"type":"object","description":"JSON-specific configuration. Only used when `type` is `\"json\"`.","properties":{"resourcePath":{"type":"string","description":"JSONPath expression to locate records within the JSON structure."}}},"xml":{"type":"object","description":"XML-specific configuration. Only used when `type` is `\"xml\"`.","properties":{"resourcePath":{"type":"string","description":"XPath expression to locate records within the XML structure."}}},"fileDefinition":{"type":"object","description":"Configuration settings for parsing files using a predefined file definition. This object enables processing of complex, non-standard, or proprietary file formats that require specialized parsing logic beyond what the standard parsers (CSV, JSON, XML, etc.) can handle.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"filedefinition\". This approach is required for properly handling:\n- Legacy or proprietary file formats with complex structures\n- Fixed-width text files where field positions are defined by character positions\n- Electronic Data Interchange (EDI) documents (X12, EDIFACT, etc.)\n- Multi-record type files where different lines have different formats\n- Files requiring complex preprocessing or custom parsing logic\n\n**File definition characteristics**\n\n- **Custom Parsing Rules**: Applies predefined parsing logic to complex file formats\n- **Reusable Configurations**: References externally defined parsing rules that can be reused\n- **Complex Format Support**: Handles formats that standard parsers cannot process\n- **Specialized Processing**: Often used for industry-specific or legacy formats\n\n**Implementation strategy for ai agents**\n\n1. **Format Analysis**:\n    - Determine if the file format is standard (CSV, JSON, XML) or requires custom parsing\n    - Check if the format follows industry standards like EDI, SWIFT, or fixed-width\n    - Assess if there are multiple record types within the same file\n    - Identify if specialized logic is needed to interpret the file structure\n\n2. 
**File Definition Selection**:\n    - Verify that a suitable file definition has already been created in the system\n    - Check if existing file definitions match the format requirements\n    - Confirm the file definition ID from system administrators if needed\n    - Ensure the file definition is compatible with the import's needs\n","properties":{"_fileDefinitionId":{"type":"string","format":"objectId","description":"Reference to the file definition resource."}}}}},"FileSystem":{"type":"object","description":"Configuration for FileSystem imports","properties":{"directoryPath":{"type":"string","description":"Directory path to read files from (required)"}},"required":["directoryPath"]},"AiAgent":{"type":"object","description":"AI Agent configuration used by both AiAgentImport and GuardrailImport (ai_agent type).\n\nConfigures which AI provider and model to use, along with instructions, parameter\ntuning, output format, and available tools. Two providers are supported:\n\n- **openai**: OpenAI models (GPT-4, GPT-4o, GPT-4o-mini, GPT-5, etc.)\n- **gemini**: Google Gemini models via LiteLLM proxy\n\nA `_connectionId` is optional (BYOK). If not provided on the parent import,\nplatform-managed credentials are used.\n","properties":{"provider":{"type":"string","enum":["openai","gemini"],"default":"openai","description":"AI provider to use.\n\n- **openai**: Uses OpenAI Responses API. Configure via the `openai` object.\n- **gemini**: Uses Google Gemini via LiteLLM. Configure via `litellm` with\n  Gemini-specific overrides in `litellm._overrides.gemini`.\n"},"openai":{"type":"object","description":"OpenAI-specific configuration. Used when `provider` is \"openai\".\n","properties":{"instructions":{"type":"string","maxLength":51200,"description":"System prompt that defines the AI agent's behavior, goals, and constraints.\nMaximum 50 KB.\n"},"model":{"type":"string","description":"OpenAI model identifier.\n"},"reasoning":{"type":"object","description":"Controls depth of reasoning for complex tasks.\n","properties":{"effort":{"type":"string","enum":["minimal","low","medium","high"],"description":"How much reasoning effort the model should invest"},"summary":{"type":"string","enum":["concise","auto","detailed"],"description":"Level of detail in reasoning summaries"}}},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"Sampling temperature. Higher values (e.g. 1.5) produce more creative output,\nlower values (e.g. 0.2) produce more focused and deterministic output.\n"},"topP":{"type":"number","minimum":0.1,"maximum":1,"description":"Nucleus sampling parameter"},"maxOutputTokens":{"type":"number","minimum":100,"maximum":128000,"default":1000,"description":"Maximum number of tokens in the model's response"},"serviceTier":{"type":"string","enum":["auto","default","priority"],"default":"default","description":"OpenAI service tier. 
\"priority\" provides higher rate limits and\nlower latency at increased cost.\n"},"output":{"type":"object","description":"Output format configuration","properties":{"format":{"type":"object","description":"Controls the structure of the model's output.\n","properties":{"type":{"type":"string","enum":["text","json_schema","blob"],"default":"text","description":"Output format type.\n\n- **text**: Free-form text response\n- **json_schema**: Structured JSON conforming to a schema\n- **blob**: Binary data output\n"},"name":{"type":"string","description":"Name for the output format (used with json_schema)"},"strict":{"type":"boolean","default":false,"description":"Whether to enforce strict schema validation on output"},"jsonSchema":{"type":"object","description":"JSON Schema for structured output. Required when `format.type` is \"json_schema\".\n","properties":{"type":{"type":"string","enum":["object","array","string","number","integer","boolean"]},"properties":{"type":"object","additionalProperties":true},"required":{"type":"array","items":{"type":"string"}},"additionalProperties":{"type":"boolean"}}}}},"verbose":{"type":"string","enum":["low","medium","high"],"default":"medium","description":"Level of detail in the model's response"}}},"tools":{"type":"array","description":"Tools available to the AI agent during processing.\n","items":{"type":"object","properties":{"type":{"type":"string","enum":["web_search","mcp","image_generation","tool"],"description":"Type of tool.\n\n- **web_search**: Search the web for information\n- **mcp**: Connect to an MCP server for additional tools\n- **image_generation**: Generate images\n- **tool**: Reference a Celigo Tool resource\n"},"webSearch":{"type":"object","description":"Web search configuration (empty object to enable)"},"imageGeneration":{"type":"object","description":"Image generation configuration","properties":{"background":{"type":"string","enum":["transparent","opaque"]},"quality":{"type":"string","enum":["low","medium","high"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024"]},"outputFormat":{"type":"string","enum":["png","webp","jpeg"]}}},"mcp":{"type":"object","description":"MCP server tool configuration","properties":{"_mcpConnectionId":{"type":"string","format":"objectId","description":"Connection to the MCP server"},"allowedTools":{"type":"array","description":"Specific tools to allow from the MCP server (all if omitted)","items":{"type":"string"}}}},"tool":{"type":"object","description":"Reference to a Celigo Tool resource.\n","properties":{"_toolId":{"type":"string","format":"objectId","description":"Reference to the Tool resource"},"overrides":{"type":"object","description":"Per-agent overrides for the tool's internal resources"}}}}}}}},"litellm":{"type":"object","description":"LiteLLM proxy configuration. 
Used when `provider` is \"gemini\".\n\nLiteLLM provides a unified interface to multiple AI providers.\nGemini-specific settings are in `_overrides.gemini`.\n","properties":{"model":{"type":"string","description":"LiteLLM model identifier"},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"Sampling temperature"},"maxCompletionTokens":{"type":"number","minimum":100,"maximum":128000,"default":1000,"description":"Maximum number of tokens in the response"},"topP":{"type":"number","minimum":0.1,"maximum":1,"description":"Nucleus sampling parameter"},"seed":{"type":"number","description":"Random seed for reproducible outputs"},"responseFormat":{"type":"object","description":"Output format configuration","properties":{"type":{"type":"string","enum":["text","json_schema","blob"],"default":"text"},"name":{"type":"string"},"strict":{"type":"boolean","default":false},"jsonSchema":{"type":"object","properties":{"type":{"type":"string","enum":["object","array","string","number","integer","boolean"]},"properties":{"type":"object","additionalProperties":true},"required":{"type":"array","items":{"type":"string"}},"additionalProperties":{"type":"boolean"}}}}},"_overrides":{"type":"object","description":"Provider-specific overrides","properties":{"gemini":{"type":"object","description":"Gemini-specific configuration overrides.\n","properties":{"systemInstruction":{"type":"string","maxLength":51200,"description":"System instruction for Gemini models. Equivalent to OpenAI's `instructions`.\nMaximum 50 KB.\n"},"tools":{"type":"array","description":"Gemini-specific tools","items":{"type":"object","properties":{"type":{"type":"string","enum":["googleSearch","urlContext","fileSearch","mcp","tool"],"description":"Type of Gemini tool.\n\n- **googleSearch**: Google Search grounding\n- **urlContext**: URL content retrieval\n- **fileSearch**: Search uploaded files\n- **mcp**: Connect to an MCP server\n- **tool**: Reference a Celigo Tool resource\n"},"googleSearch":{"type":"object","description":"Google Search configuration (empty object to enable)"},"urlContext":{"type":"object","description":"URL context configuration (empty object to enable)"},"fileSearch":{"type":"object","properties":{"fileSearchStoreNames":{"type":"array","items":{"type":"string"}}}},"mcp":{"type":"object","properties":{"_mcpConnectionId":{"type":"string","format":"objectId"},"allowedTools":{"type":"array","items":{"type":"string"}}}},"tool":{"type":"object","properties":{"_toolId":{"type":"string","format":"objectId"},"overrides":{"type":"object"}}}}}},"responseModalities":{"type":"array","description":"Response output modalities","items":{"type":"string","enum":["text","image"]},"default":["text"]},"topK":{"type":"number","description":"Top-K sampling parameter for Gemini"},"thinkingConfig":{"type":"object","description":"Controls Gemini's extended thinking capabilities","properties":{"includeThoughts":{"type":"boolean","description":"Whether to include thinking steps in the response"},"thinkingBudget":{"type":"number","minimum":100,"maximum":4000,"description":"Maximum tokens allocated for thinking"},"thinkingLevel":{"type":"string","enum":["minimal","low","medium","high"]}}},"imageConfig":{"type":"object","description":"Gemini image generation configuration","properties":{"aspectRatio":{"type":"string","enum":["1:1","2:3","3:2","3:4","4:3","4:5","5:4","9:16","16:9","21:9"]},"imageSize":{"type":"string","enum":["1K","2K","4k"]}}},"mediaResolution":{"type":"string","enum":["low","medium","high"],"description":"Resolution for 
media inputs (images, video)"}}}}}}}}},"Guardrail":{"type":"object","description":"Configuration for GuardrailImport adaptor type.\n\nGuardrails are safety and compliance checks that can be applied to data\nflowing through integrations. Three types of guardrails are supported:\n\n- **ai_agent**: Uses an AI model to evaluate data against custom instructions\n- **pii**: Detects and optionally masks personally identifiable information\n- **moderation**: Checks content against moderation categories (e.g. `hate`, `violence`, `harassment`, `sexual`, `self_harm`, `illicit`). Use category names exactly as listed in `moderation.categories` — e.g. `hate` (not \"hate speech\"), `violence` (not \"violent content\").\n\nGuardrail imports do not require a `_connectionId` (unless using BYOK for `ai_agent` type).\n","properties":{"type":{"type":"string","enum":["ai_agent","pii","moderation"],"description":"The type of guardrail to apply.\n\n- **ai_agent**: Evaluate data using an AI model with custom instructions.\n  Requires the `aiAgent` sub-configuration.\n- **pii**: Detect personally identifiable information in data.\n  Requires the `pii` sub-configuration with at least one entity type.\n- **moderation**: Check content for harmful categories.\n  Requires the `moderation` sub-configuration with at least one category.\n"},"confidenceThreshold":{"type":"number","minimum":0,"maximum":1,"default":0.7,"description":"Confidence threshold for guardrail detection (0 to 1).\n\nOnly detections with confidence at or above this threshold will be flagged.\nLower values catch more potential issues but may increase false positives.\n"},"aiAgent":{"$ref":"#/components/schemas/AiAgent"},"pii":{"type":"object","description":"Configuration for PII (Personally Identifiable Information) detection.\n\nRequired when `guardrail.type` is \"pii\". At least one entity type must be specified.\n","properties":{"entities":{"type":"array","description":"PII entity types to detect in the data.\n\nAt least one entity must be specified when using PII guardrails.\n","items":{"type":"string","enum":["credit_card_number","card_security_code_cvv_cvc","cryptocurrency_wallet_address","date_and_time","email_address","iban_code","bic_swift_bank_identifier_code","ip_address","location","medical_license_number","national_registration_number","persons_name","phone_number","url","us_bank_account_number","us_drivers_license","us_itin","us_passport_number","us_social_security_number","uk_nhs_number","uk_national_insurance_number","spanish_nif","spanish_nie","italian_fiscal_code","italian_drivers_license","italian_vat_code","italian_passport","italian_identity_card","polish_pesel","finnish_personal_identity_code","singapore_nric_fin","singapore_uen","australian_abn","australian_acn","australian_tfn","australian_medicare","indian_pan","indian_aadhaar","indian_vehicle_registration","indian_voter_id","indian_passport","korean_resident_registration_number"]}},"mask":{"type":"boolean","default":false,"description":"Whether to mask detected PII values in the output.\n\nWhen true, detected PII is replaced with masked values.\nWhen false, PII is only flagged without modification.\n"}}},"moderation":{"type":"object","description":"Configuration for content moderation.\n\nRequired when `guardrail.type` is \"moderation\". 
At least one category must be specified.\n","properties":{"categories":{"type":"array","description":"Content moderation categories to check for.\n\nAt least one category must be specified when using moderation guardrails.\n","items":{"type":"string","enum":["sexual","sexual_minors","hate","hate_threatening","harassment","harassment_threatening","self_harm","self_harm_intent","self_harm_instructions","violence","violence_graphic","illicit","illicit_violent"]}}}}},"required":["type"]},"OneToMany":{"type":"boolean","description":"Controls whether the resource treats child records within parent records as the primary data units.\n\n**Important: this is not for specifying where records are in an api response**\n\nIf you need to tell an export where to find the array of records in the HTTP response\nbody (e.g. \"the records are at data.items\"), use `http.response.resourcePath` instead.\n`oneToMany` serves a completely different purpose — it operates on records that have\nalready been extracted from the response.\n\n**What oneToMany actually does**\n\nWhen set to true, this field fundamentally changes how record data is processed:\n- The system will \"unwrap\" nested child records from their parent containers\n- Each child record becomes a separate output record for downstream processing\n- The pathToMany field must be set to indicate where these child records are located\n- Parent record fields can still be accessed via a special \"parent\" context\n\nThis is typically used on **lookup exports** (isLookup: true) or **imports** where\nthe incoming records contain nested arrays that need to be fanned out.\n\nCommon scenarios for enabling this option:\n- Processing order line items individually from an order export\n- Handling invoice line items from an invoice export\n- Processing individual transaction lines from journal entries\n- Extracting address records from customer exports\n\nThis setting applies for the duration of the current flow step only and does not affect\nhow data is stored or structured in other flow steps.\n\nIf false (default), the resource processes each top-level record as a single unit.\n","default":false},"PathToMany":{"type":"string","description":"Specifies the JSON path to child records when oneToMany mode is enabled.\n\nThis field is only used when oneToMany is set to true. It defines the exact location\nof child records within the parent record structure using dot notation:\n\n- Simple path: \"items\" for a direct child array field\n- Nested path: \"lines.lineItems\" for a more deeply nested array\n- Multi-level: \"details.items.subitems\" for deeply nested structures\n\nThe system uses this path to:\n- Locate the array of child records within each parent record\n- Extract each array element as a separate record for processing\n- Make both the child record data and parent context available to downstream steps\n\nImportant considerations:\n- The path must point to an array field\n- For row-based data (i.e. where Celigo models this via an array or arrays of objects), this field is not required\n- If the path is invalid or doesn't exist, the resource will report success but process zero records\n- Maximum path depth: 10 levels\n\nThis field must contain a valid JSON path expression using dot notation.\n"},"Filter":{"type":"object","description":"Configuration for selectively processing records based on specified criteria. 
This object enables\nprecise control over which items are included or excluded from processing operations.\n\n**Filter behavior**\n\nWhen configured, the filter is applied before processing begins:\n- Items that match the filter criteria are processed\n- Items that don't match are completely skipped\n- No partial processing is performed\n\n**Implementation approaches**\n\nThere are two distinct filtering mechanisms available:\n\n**Rule-Based Filtering (`type: \"expression\"`)**\n- **Best For**: Common filtering patterns based on standard attributes\n- **Capabilities**: Filter by names, values, dates, numerical ranges, text patterns\n- **Advantages**: Declarative, no coding required, consistent performance\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear, static criteria for selection\n\n**Script-Based Filtering (`type: \"script\"`)**\n- **Best For**: Complex logic, dynamic criteria, or business rules\n- **Capabilities**: Full programmatic control, access to complete metadata\n- **Advantages**: Maximum flexibility, can implement any filtering logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Simple rules aren't sufficient or logic needs to be dynamic\n","properties":{"type":{"type":"string","description":"Determines which filtering mechanism to use. This choice affects which properties\nmust be configured and how filtering logic is implemented.\n\n**Available types**\n\n**Rule-Based Filtering (`\"expression\"`)**\n- **Required Config**: The `expression` object with rule definitions\n- **Behavior**: Evaluates declarative rules against item attributes\n- **Best For**: Common patterns like name matching, date ranges, value limits\n- **Advantages**: Simpler to configure, no custom code required\n\n**Script-Based Filtering (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to determine which items to process\n- **Best For**: Complex conditions, business logic, dynamic criteria\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard filtering needs (name, size, date), use `\"expression\"`\n2. For complex logic or conditions not covered by expressions, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based filtering. This object enables filtering\nitems based on common attributes without requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define filtering rules that\ncan match against item attributes like name, type, value, date, and other properties.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Rules can be combined with AND/OR logic\n- Each rule can check a specific attribute\n- Multiple conditions can be applied (ranges, pattern matching, exact matches)\n\n**Common filter patterns**\n\n1. **Pattern matching**: Using wildcards like `*` and `?`\n2. **Value range filtering**: Numbers between min and max values\n3. **Date range filtering**: Items created/modified within specific time ranges\n4. 
**Status checking**: Items with specific status values or properties\n\nFor AI agents: Rule-based filtering should be your first choice when the filtering criteria\ncan be expressed in terms of standard attributes. Only use script-based filtering when\nmore complex logic is required.\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"1\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"1\"\nfor current implementations.\n","enum":["1"]},"rules":{"type":"array","description":"Expression array defining filter conditions using prefix notation. The first element is the operator,\nfollowed by its operands which may themselves be nested expression arrays.\n\nThe rule expression follows this pattern:\n- First element: Operator name (string)\n- Remaining elements: Operands for that operator (values or nested expressions)\n\n**Expression structure**\n\nFilter expressions use a prefix notation where operators appear before their operands:\n```\n[operator, operand1, operand2, ...]\n```\n\n**Comparison Operators**\n- `\"equals\"`: Exact match (equals)\n- `\"notequals\"`: Not equal to value (not equals)\n- `\"greaterthan\"`: Value is greater than specified value (is greater than)\n- `\"greaterthanequals\"`: Value is greater than or equal to specified value (is greater than or equals)\n- `\"lessthan\"`: Value is less than specified value (is less than)\n- `\"lessthanequals\"`: Value is less than or equal to specified value (is less than or equals)\n- `\"startswith\"`: String starts with specified prefix (starts with)\n- `\"endswith\"`: String ends with specified suffix (ends with)\n- `\"contains\"`: String contains specified substring (contains)\n- `\"doesnotcontain\"`: String does not contain specified substring (does not contain)\n- `\"isempty\"`: Field is empty or null (is empty)\n- `\"isnotempty\"`: Field contains a value (is not empty)\n- `\"matches\"`: Matches specified pattern (matches)\n\n**Logical Operators**\n- `\"and\"`: All conditions must be true\n- `\"or\"`: At least one condition must be true\n- `\"not\"`: Negates the condition\n\n**Field Access and Type Conversion**\n- `\"extract\"`: Access a field from the item by name\n- `\"settings\"`: Access a custom setting from the flow, flow step, or integration configuration\n- `\"boolean\"`: Convert value to Boolean type\n- `\"epochtime\"`: Convert value to Epoch Time (Unix timestamp)\n- `\"number\"`: Convert value to Number type\n- `\"string\"`: Convert value to String type\n\n**Field Access Details**\n\n**Using `extract` to access record fields:**\n- Retrieves values from the current record being processed\n- Can access nested properties using dot notation (e.g., `\"customer.email\"`)\n- Returns the raw field value which may need type conversion\n\n**Using `settings` to access configuration values:**\n- Retrieves values from the integration's configuration settings\n- Supports different scopes with prefix notation:\n  - `flow.settingName`: Access flow-level settings\n  - `export.settingName`: Access export-level settings\n  - `import.settingName`: Access import-level settings\n  - `integration.settingName`: Access integration-level settings\n- Useful for dynamic filtering based on configuration\n\n**Field Transformations**\n- `\"lowercase\"`: Convert string to lowercase\n- `\"uppercase\"`: Convert string to uppercase\n- `\"ceiling\"`: Round number up to the nearest integer\n- `\"floor\"`: Round number down to 
the nearest integer\n- `\"abs\"`: Get absolute value of a number\n\nType conversion operators are often necessary when comparing extracted field values against literals or when the field type doesn't match the comparison operator's expected type. For example:\n\n```json\n[\n  \"equals\",\n  [\n    \"number\",  // Convert to number before comparison\n    [\n      \"extract\",\n      \"quantity\"\n    ]\n  ],\n  100\n]\n```\n\nExample with datetime conversion:\n```json\n[\n  \"greaterthan\",\n  [\n    \"epochtime\",  // Convert to Unix timestamp before comparison\n    [\n      \"extract\",\n      \"createdDate\"\n    ]\n  ],\n  1609459200000  // January 1, 2021 as Unix timestamp in milliseconds\n]\n```\n\nExample with transformations:\n```json\n[\n  \"and\",\n  [\n    \"matches\",\n    [\n      \"lowercase\",  // Convert to lowercase before matching\n      [\n        \"string\",\n        [\n          \"extract\",\n          \"categories\"\n        ]\n      ]\n    ],\n    \"netsuite\"\n  ],\n  [\n    \"notequals\",\n    [\n      \"string\",\n      [\n        \"extract\",\n        \"recurrence.pattern.type\"\n      ]\n    ],\n    \"\"\n  ]\n]\n```\n\nExample comparing a record field with a flow setting:\n```json\n[\n  \"equals\",\n  [\n    \"string\",\n    [\n      \"extract\",\n      \"trantype\"\n    ]\n  ],\n  [\n    \"string\",\n    [\n      \"settings\",\n      \"flow.trantype\"\n    ]\n  ]\n]\n```\n\n**Examples**\n\nExample 1: Status field is not equal to \"cancelled\"\n```json\n[\n  \"notequals\",\n  [\n    \"extract\",\n    \"status\"\n  ],\n  \"cancelled\"\n]\n```\n\nExample 2: Filename starts with \"HC\"\n```json\n[\n  \"startswith\",\n  [\n    \"extract\",\n    \"filename\"\n  ],\n  \"HC\"\n]\n```\n\nExample 3: Amount is greater than 100\n```json\n[\n  \"greaterthan\",\n  [\n    \"number\",\n    [\n      \"extract\",\n      \"amount\"\n    ]\n  ],\n  100\n]\n```\n\nExample 4: Order date is after January 1, 2023\n```json\n[\n  \"greaterthan\",\n  [\n    \"extract\",\n    \"orderDate\"\n  ],\n  \"2023-01-01T00:00:00Z\"\n]\n```\n\nExample 5: Category contains any of [\"Urgent\", \"High Priority\"]\n```json\n[\n  \"anyof\",\n  [\n    \"extract\",\n    \"category\"\n  ],\n  [\"Urgent\", \"High Priority\"]\n]\n```\n","items":{"oneOf":[{"title":"String","type":"string"},{"title":"Number","type":"number"},{"title":"Boolean","type":"boolean"},{"title":"Object","type":"object"},{"title":"Array","type":"array"}]}}}},"script":{"type":"object","description":"Configuration for programmable script-based filtering. This object enables complex, custom\nfiltering logic beyond what expression-based filtering can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to determine which items\nshould be processed.\n\n**Implementation approach**\n\nScript-based filtering works by:\n1. Executing the specified function from the referenced script\n2. Passing item data to the function\n3. 
Using the function's return value (true/false) to determine inclusion\n\n**Common use cases**\n\nScript filtering is ideal for:\n- Complex business logic that can't be expressed as simple rules\n- Dynamic filtering criteria that change based on external factors\n- Content-based filtering that requires deep inspection\n- Advanced pattern matching beyond simple wildcards\n- Multi-stage filtering with intermediate logic\n\nFor AI agents: Only use script-based filtering when expression-based filtering is insufficient.\nScript filtering requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to the Script resource that contains the filtering logic. This must be a valid\nObjectId of a Script resource that exists in the system.\n\nThe referenced script must contain the function specified in the `function` field\nand must be written to handle filtering specifically. The script receives\nitem data as its input and must return a boolean value indicating whether\nto process the item (true) or skip it (false).\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n"},"function":{"type":"string","description":"Name of the function within the script to execute for filtering decisions. This function\nmust exist in the script referenced by _scriptId.\n\n**Function requirements**\n\nThe specified function must:\n- Accept item data as its first parameter\n- Return a boolean value (true to process the item, false to skip it)\n- Handle errors gracefully\n- Execute efficiently (as it may run for many items)\n\n**Function signature**\n\n```javascript\nfunction filterItems(itemData) {\n  // itemData contains properties of the item being evaluated\n  // Custom logic here\n  return true; // or false to skip the item\n}\n```\n\nFor AI agents: Ensure the function name exactly matches a function defined in the\nreferenced script, as mismatches will cause the filter to fail.\n"}}}}},"Transform":{"type":"object","description":"Configuration for transforming data during processing operations. This object enables\nreshaping of records.\n\n**Transformation capabilities**\n\nCeligo's transformation engine offers powerful features for data manipulation:\n- Precise field mapping with JSONPath expressions\n- Support for any level of nested arrays\n- Formula-based field value generation\n- Dynamic references to flow and integration settings\n\n**Implementation approaches**\n\nThere are two distinct transformation mechanisms available:\n\n**Rule-Based Transformation (`type: \"expression\"`)**\n- **Best For**: Most transformation scenarios from simple to complex\n- **Capabilities**: Field mapping, formula calculations, lookups, nested data handling\n- **Advantages**: Visual configuration, no coding required, intuitive interface\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear mapping requirements or need to reshape data structure\n\n**Script-Based Transformation (`type: \"script\"`)**\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Capabilities**: Full programmatic control, custom processing, complex business rules\n- **Advantages**: Maximum flexibility, can implement any transformation logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Visual transformation tools aren't sufficient for your use case\n","properties":{"type":{"type":"string","description":"Determines which transformation mechanism to use. 
This choice affects which properties\nmust be configured and how transformation logic is implemented.\n\n**Available types**\n\n**Rule-Based Transformation (`\"expression\"`)**\n- **Required Config**: The `expression` object with mapping definitions\n- **Behavior**: Applies declarative rules to reshape data\n- **Best For**: Most transformation scenarios from simple to complex\n- **Advantages**: Visual configuration, no coding required\n\n**Script-Based Transformation (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to transform data\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard data transformations, use `\"expression\"`\n2. For complex logic or specialized processing, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based transformations. This object enables reshaping data\nwithout requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define transformation rules that\ncan map, modify, and generate data elements.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Field mappings define how input data is transformed to target fields\n- Formulas can be used to calculate or generate new values\n- Lookups can enrich data by fetching related information\n- Mode determines how records are processed (create new or modify existing)\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"2\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"2\"\nfor current implementations.\n","enum":["2"]},"rulesTwoDotZero":{"type":"object","description":"Configuration for version 2 transformation rules. 
This object contains the core logic\nfor how data is mapped, enriched, and transformed.\n\n**Capabilities**\n\nTransformation 2.0 provides:\n- Precise field mapping with JSONPath expressions\n- Support for deeply nested data structures\n- Formula-based field generation\n- Dynamic lookups for data enrichment\n- Multiple operating modes to fit different scenarios\n","properties":{"mode":{"type":"string","description":"Transformation mode that determines how records are handled during processing.\n\n**Available modes**\n\n**Create Mode (`\"create\"`)**\n- **Behavior**: Builds entirely new output records from inputs\n- **Use When**: Output structure differs significantly from input\n- **Advantage**: Clean slate approach, no field inheritance\n\n**Modify Mode (`\"modify\"`)**\n- **Behavior**: Makes targeted edits to existing records\n- **Use When**: Output structure should remain similar to input\n- **Advantage**: Preserves unmapped fields from the original record\n","enum":["create","modify"]},"mappings":{"$ref":"#/components/schemas/Mappings"},"lookups":{"allOf":[{"description":"Shared lookup tables used across all mappings defined in the transformation rules.\n\n**Purpose**\n\nLookups provide centralized value translation that can be referenced from any mapping\nin your transformation configuration. They enable consistent translation of codes, IDs,\nand values between systems without duplicating translation logic.\n\n**Usage in transformations**\n\nLookups are particularly valuable in transformations for:\n\n- **Data Normalization**: Standardizing values from diverse source systems\n- **Code Translation**: Converting between different coding systems (e.g., status codes)\n- **Field Enrichment**: Adding descriptive values based on ID or code lookups\n- **Cross-Reference Resolution**: Mapping identifiers between integrated systems\n\n**Implementation**\n\nLookups are defined once in this array and referenced by name in mappings:\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"statusMapping\",\n    \"map\": {\n      \"A\": \"Active\",\n      \"I\": \"Inactive\",\n      \"P\": \"Pending\"\n    },\n    \"default\": \"Unknown Status\"\n  }\n]\n```\n\nThen referenced in mappings using the lookupName property:\n\n```json\n{\n  \"generate\": \"status\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.statusCode\",\n  \"lookupName\": \"statusMapping\"\n}\n```\n\nThe system automatically applies the lookup during transformation processing.\n\nFor complete details on lookup properties and behavior, see the Lookups schema.\n"},{"$ref":"#/components/schemas/Lookups"}]}}}}},"script":{"type":"object","description":"Configuration for programmable script-based transformations. This object enables complex, custom\ntransformation logic beyond what expression-based transformations can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to transform data according to\nspecialized business rules or complex algorithms.\n\n**Implementation approach**\n\nScript-based transformation works by:\n1. Executing the specified function from the referenced script\n2. Passing input data to the function\n3. 
Using the function's return value as the transformed output\n\n**Common use cases**\n\nScript transformation is ideal for:\n- Complex business logic that can't be expressed through mappings\n- Algorithmic transformations requiring computation\n- Dynamic transformations based on external factors\n- Legacy system data format compatibility\n- Multi-stage processing with intermediate steps\n\nOnly use script-based transformation when expression-based transformation is insufficient.\nScript transformation requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the transformation logic.\n\nThe referenced script should contain the function specified in the\n'function' property.\n","format":"objectid"},"function":{"type":"string","description":"Name of the function within the script to execute for transformation. This function\nmust exist in the script referenced by _scriptId.\n"}}}}},"Mappings":{"type":"array","description":"Array of field mapping configurations for transforming data from one format into another.\n\n**Guidance**\n\nThis schema is designed around RECURSION as its core architectural principle. Understanding this recursive\nnature is essential for building effective mappings:\n\n1. The schema is self-referential by design - a mapping can contain nested mappings of the same structure\n2. Complex data structures (nested objects, arrays of objects, arrays of arrays of objects) are ALL\n   handled through this recursive pattern\n3. Each mapping handles one level of the data structure; deeper levels are handled by nested mappings\n\nWhen generating mappings programmatically:\n- For simple fields (string, number, boolean): Create single mapping objects\n- For objects: Create a parent mapping with nested 'mappings' array containing child field mappings\n- For arrays: Use 'buildArrayHelper' with extract paths defining array inputs and\n  recursive 'mappings' to define object structures\n\nThe system will process these nested structures recursively during runtime, ensuring proper construction\nof complex hierarchical data while maintaining excellent performance.\n","items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. json Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean, date)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}}},"items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. json Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean, date)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}},"Lookups":{"type":"array","description":"Configuration for value-to-value transformations using lookup tables.\n\n**Purpose**\n\nLookups provide a way to translate values from one system to another. They transform\ninput values into output values using either static mapping tables or\ndynamic lookup caches.\n\n**Lookup mechanisms**\n\nThere are two distinct lookup mechanisms available:\n\n1. **Static Lookups**: Define a simple key-value map object and store it as part of your resource\n   - Best for: Small, fixed sets of values that rarely change\n   - Implementation: Configure the `map` object with input-to-output value mappings\n   - Example: Country codes, status values, simple translations\n\n2. **Dynamic Lookups**: Reference an existing 'Lookup Cache' resource in your Celigo account\n   - Best for: Large datasets, frequently changing values, or complex reference data\n   - Implementation: Configure `_lookupCacheId` to reference cached data maintained independently\n   - Example: Product catalogs, customer databases, pricing information\n\n**Property usage**\n\nThere are two mutually exclusive ways to configure lookups, depending on which mechanism you choose:\n\n1. 
**For Static Mappings**: Configure the `map` property with a direct key-value object\n   ```json\n   \"map\": {\"US\": \"United States\", \"CA\": \"Canada\"}\n   ```\n\n2. **For Dynamic Lookups**: Configure the following properties:\n   - `_lookupCacheId`: Reference to the lookup cache resource\n   - `extract`: JSON path to extract specific value from the returned lookup object\n\n**When to use**\n\nLookups are ideal for:\n\n1. **Value Translation**: Mapping codes or IDs to human-readable values\n\n2. **Data Enrichment**: Adding related information to records during processing\n\n3. **Normalization**: Ensuring consistent formatting of values across systems\n\n**Implementation details**\n\nLookups can be referenced in:\n\n1. **Field Mappings**: Direct use in field transformation configurations\n\n2. **Handlebars Templates**: Use within templates with the syntax:\n   ```\n   {{lookup 'lookupName' record.fieldName}}\n   ```\n\n**Example usage**\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"countryCodeToName\",\n    \"map\": {\n      \"US\": \"United States\",\n      \"CA\": \"Canada\",\n      \"UK\": \"United Kingdom\"\n    },\n    \"default\": \"Unknown Country\",\n    \"allowFailures\": true\n  },\n  {\n    \"name\": \"productDetails\",\n    \"_lookupCacheId\": \"60a2c4e6f321d800129a1a3c\",\n    \"extract\": \"$.details.price\",\n    \"allowFailures\": false\n  }\n]\n```\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique identifier for the lookup table within this configuration.\n\nThis name must be unique within the scope where the lookup is defined and is used to reference\nthe lookup in handlebars templates with the syntax {{lookup 'name' value}}.\n\nChoose descriptive names that indicate the transformation purpose, such as:\n- \"countryCodeToName\" for country code to full name conversion\n- \"statusMapping\" for status code translations\n- \"departmentCodes\" for department code to name mapping\n"},"map":{"type":"object","description":"The lookup mapping table as key-value pairs.\n\nThis object contains the input values as keys and their corresponding\noutput values. When an input value matches a key in this object,\nit will be replaced with the corresponding value.\n\nThe map should be kept to a reasonable size (typically under 100 entries)\nfor optimal performance. 
For larger mapping requirements, consider using\ndynamic lookups instead.\n\nMaps can include:\n- Simple code to name conversions: {\"US\": \"United States\"}\n- Status transformations: {\"A\": \"Active\", \"I\": \"Inactive\"}\n- ID to name mappings: {\"100\": \"Marketing\", \"200\": \"Sales\"}\n\nValues can be strings, numbers, or booleans, but all are stored as strings\nin the configuration.\n"},"_lookupCacheId":{"type":"string","description":"Reference to a LookupCache resource that contains the reference data for the lookup.\n\n**Purpose**\n\nThis field connects the lookup to an external data source that has been cached in the system.\nUnlike static lookups that use the `map` property, dynamic lookups can reference large datasets\nor frequently changing information without requiring constant updates to the integration.\n\n**Implementation details**\n\nThe LookupCache resource referenced by this ID contains:\n- The data records to be used as a reference source\n- Configuration for how the data should be indexed and accessed\n- Caching parameters to balance performance with data freshness\n\n**Usage patterns**\n\nCommonly used to reference:\n- Product catalogs or SKU databases\n- Customer or account information\n- Pricing tables or discount rules\n- Complex business logic lookup tables\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n","format":"objectid"},"extract":{"type":"string","description":"JSON path expression that extracts a specific value from the cached lookup object.\n\n**Purpose**\n\nWhen using dynamic lookups with a LookupCache, this JSON path identifies which field to extract\nfrom the cached object after it has been retrieved using the lookup key.\n\n**Implementation details**\n\n- Must use JSON path syntax (similar to mapping extract fields)\n- Operates on the cached object returned by the lookup operation\n- Examples:\n  - \"$.name\" - Extract the name field from the top level\n  - \"$.details.price\" - Extract a nested price field\n  - \"$.attributes[0].value\" - Extract a value from the first element of an array\n\n**Usage scenario**\n\nWhen a lookup cache contains complex objects:\n```json\n// Cache entry for key \"PROD-123\":\n{\n  \"id\": \"PROD-123\",\n  \"name\": \"Premium Widget\",\n  \"details\": {\n    \"price\": 99.99,\n    \"currency\": \"USD\",\n    \"inStock\": true\n  }\n}\n```\n\nSetting extract to \"$.details.price\" would return 99.99 as the lookup result.\n\nIf no extract is provided, the entire cached object is returned as the lookup result.\n"},"default":{"type":"string","description":"Default value to use when the source value is not found in the lookup map.\n\nThis value is used as a fallback when:\n1. The source value doesn't match any key in the map\n2. 
allowFailures is set to true\n\nSetting an appropriate default helps prevent flow failures due to unexpected\nvalues and provides predictable behavior for edge cases.\n\nCommon default patterns include:\n- Descriptive unknowns: \"Unknown Country\", \"Unspecified Status\"\n- Original value indicators: \"{Original Value}\", \"No mapping found\"\n- Neutral values: \"Other\", \"N/A\", \"Miscellaneous\"\n\nIf allowFailures is false and no default is specified, the flow will fail\nwhen encountering unmapped values.\n"},"allowFailures":{"type":"boolean","description":"When true, missing lookup values will use the default value rather than causing an error.\n\n**Behavior control**\n\nThis field determines how the system handles source values that don't exist in the map:\n\n- true: Use the default value for missing mappings and continue processing\n- false: Treat missing mappings as errors, failing the record\n\n**Recommendation**\n\nSet this to true when:\n- New source values might appear over time\n- Data quality issues could introduce unexpected values\n- Processing should continue even with imperfect mapping\n\nSet this to false when:\n- Complete data accuracy is critical\n- All possible source values are known and controlled\n- Missing mappings indicate serious data problems that should be addressed\n\nThe best practice is typically to set allowFailures to true with a meaningful\ndefault value, so flows remain operational while alerting you to missing mappings.\n"}}}},"Form":{"type":"object","description":"Configuration for creating user-friendly settings forms that make it easier for less technical users\nto configure integration resources.\n\n**Settings form builder**\n\nThe Settings Form Builder allows you to create or edit user-friendly fields that prompt for text entry\nor selections that will be returned as settings applied to this resource. Your forms can include any\nfield types that you see elsewhere in integrator.io, such as:\n\n- Text fields\n- Dropdown selections\n- Checkboxes\n- Radio buttons\n- Date pickers\n- Multi-select fields\n- Search fields\n\nForm fields make it much easier for less technical users to work with your integration settings by:\n\n- Providing clear labels and help text\n- Enforcing validation rules\n- Offering pre-defined selection options\n- Grouping related settings logically\n- Supporting conditional visibility\n- Creating a consistent user experience\n","properties":{"form":{"type":"object","description":"Configuration that defines the structure, fields, and behavior of the settings form.\n\nThis object contains the complete definition of the form's layout, fields, validation rules,\nand interactive behaviors. 
The specific structure depends on the form complexity and can include\nfield definitions, sections, conditional display logic, and default values.\n\nThe form configuration is typically created and managed through the visual Form Builder interface\nrather than edited directly as JSON.\n","properties":{"fieldMap":{"type":"object","description":"A mapping of field identifiers to their configuration objects.\nEach key in this object represents a unique field ID, and the value contains\nall the configuration settings for that specific form field.\n","additionalProperties":{"type":"object","description":"Configuration for an individual form field.\n","properties":{"id":{"type":"string","description":"Unique identifier for this field within the form.\nThis value typically matches the key in the fieldMap object.\n"},"name":{"type":"string","description":"Name of the field, used as the property name when generating the settings object\nfrom the submitted form data.\n"},"type":{"type":"string","description":"The type of form control to render for this field.\n","enum":["text","checkbox","radiogroup","relativeuri","editor","keyvalue","select","multiselect","toggle","datetime","date"]},"label":{"type":"string","description":"Display label shown next to the field in the form.\n"},"description":{"type":"string","description":"Detailed explanation text that appears below the field, providing more context\nthan the label or helpText.\n"},"helpText":{"type":"string","description":"Explanatory text that appears when hovering over the help icon next to the field.\nUsed to provide additional guidance on how to use the field.\n"},"required":{"type":"boolean","description":"When true, the field must have a value before the form can be submitted.\n","default":false},"multiline":{"type":"boolean","description":"For text fields, determines whether the input should be a multi-line text area\ninstead of a single-line input.\n","default":false},"rowsMax":{"type":"integer","description":"For multiline text fields, specifies the maximum number of visible rows.\n"},"inputType":{"type":"string","description":"For text fields, specifies the HTML input type attribute to apply additional\nvalidation or specialized input behavior.\n","enum":["text","number","email","password","tel","url"]},"delimiter":{"type":"string","description":"For text fields, specifies a character to use for splitting the input into an array.\nUsed for collecting multiple values in a single text field.\n"},"mode":{"type":"string","description":"For editor fields, specifies the type of content being edited for syntax highlighting.\n","enum":["json","xml","csv","text"]},"keyName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the key input.\n"},"valueName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the value input.\n"},"showDelete":{"type":"boolean","description":"For keyvalue fields, determines whether to show a delete button for each key-value pair.\n"},"doNotAllowFutureDates":{"type":"boolean","description":"For date and datetime fields, restricts selection to dates not in the future.\n"},"skipTimezoneConversion":{"type":"boolean","description":"For datetime fields, prevents automatic timezone conversion of the date value.\n"},"options":{"type":"array","description":"For fields that present choices (select, multiselect, radiogroup, toggle), defines\nthe available options.\n","items":{"oneOf":[{"title":"Option 
group","type":"object","properties":{"items":{"type":"array","items":{"oneOf":[{"title":"String value","type":"string"},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]},"description":"Array of option values/labels to display in the selection control.\n"}}},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]}},"visibleWhen":{"type":"array","description":"Conditional display rules that determine when this field should be visible.\nIf empty or not provided, the field is always visible.\n","items":{"type":"object","properties":{"field":{"type":"string","description":"The ID of another field whose value controls the visibility of this field.\n"},"is":{"type":"array","items":{"type":"string"},"description":"Array of values - if the referenced field has any of these values,\nthis field will be visible.\n"}}}}}}},"layout":{"type":"object","description":"Defines how the form fields are arranged and grouped in the UI.\nThe layout can organize fields into columns, sections, or other visual groupings.\n","properties":{"type":{"type":"string","description":"The type of layout to use for the form.\n","enum":["column"]},"containers":{"type":"array","description":"Array of container objects that group fields or contain nested containers.\nEach container can represent a column, box, indented section, or collapsible section.\n","items":{"type":"object","properties":{"type":{"type":"string","description":"The visual style of the container.\n","enum":["indent","box","collapse"]},"label":{"type":"string","description":"The heading text displayed for this container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this container.\nEach ID must correspond to a key in the fieldMap object.\n"},"containers":{"type":"array","description":"Nested containers within this container. 
Allows for hierarchical organization\nof fields with different visual styles.\n","items":{"type":"object","properties":{"label":{"type":"string","description":"The heading text displayed for this nested container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this nested container.\n"}}}}}}}}}},"additionalProperties":true},"init":{"type":"object","description":"Configuration for custom JavaScript initialization that executes when the form is first loaded.\n\nThis object defines a JavaScript hook that prepares the form for use, sets initial field values,\nperforms validation, or otherwise customizes the form behavior before it is displayed to the user.\n\n**Function signature**\n\nThe initialization function is invoked with a single 'options' argument containing contextual information:\n```javascript\nfunction formInit(options) {\n  // Process options and return the form object\n  return options.resource.settingsForm.form;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.resource` - The current resource being configured\n- `options.parentResource` - The parent of the current resource\n- `options.grandparentResource` - The grandparent of the current resource\n- `options.license` - For integration apps, the license provisioned to the integration\n- `options.parentLicense` - For integration apps, the parent of the license\n\n\n**Common uses**\n\n- Dynamically generate field options based on resource configuration\n- Pre-populate default values from related resources\n- Apply conditional logic that depends on resource properties\n- Add, remove, or modify form fields based on user permissions or account settings\n- Fetch external data to populate selection options\n- Implement complex validation rules that depend on resource context\n- Create branching form experiences based on user selections\n\n**Return value**\n\nThe function must return a valid form object that the UI can render.\nThrowing an exception will signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called when the form\nis initialized and should handle any custom setup logic.\n\nThe function must follow the expected signature and return a valid form object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the initialization function.\n\nThe referenced script should contain the function specified in the\n'function' property. This script must be accessible within the user's account\nand have appropriate permissions.\n"}}}}},"PreSave":{"type":"object","description":"Defines a JavaScript hook that executes before the resource is saved.\n\nThis hook allows for programmatic validation, transformation, or enrichment of the\nresource itself before it is persisted. 
It can be used to enforce business rules,\nset derived properties, or implement cross-field validations that can't be expressed\nthrough the standard UI.\n\n**Function signature**\n\nThe preSave function is invoked with a single 'options' argument containing:\n```javascript\nfunction preSave(options) {\n  // Process options and return the modified resource\n  return options.newResource;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.newResource` - The resource being saved (with pending changes)\n- `options.oldResource` - The previous version of the resource (before changes)\n\n\n**Common uses**\n\n- Enforcing complex business rules across multiple fields\n- Automatically deriving field values based on other configuration\n- Performing validation that depends on external systems or data\n- Normalizing or standardizing configuration values\n- Adding computed or derived properties\n- Implementing versioning or change tracking\n- Dynamically looking up data using the Celigo API module to enrich configuration\n\n**Return value**\n\nThe function must return the newResource object (potentially modified) to be saved.\nThrowing an exception will prevent saving and signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called just before\nthe resource is saved.\n\nThe function must follow the expected signature and return the resource object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the preSave function.\n\nThe referenced script should contain the function specified in the\n'function' property. 
This script must be accessible within the user's account\nand have appropriate permissions.\n"}}},"Settings":{"type":"object","description":"Configuration settings that can be accessed by hooks, filters, mappings and handlebars templates at runtime.\n\nIt enables customization of the resource's logic, allowing hooks, mappings, filters, and\nhandlebars to access and apply the settings at runtime.\n\n**Usage**\n\nThe settings object can store arbitrary JSON data that you want to save with the resource.\nWhile it's often populated through a form defined in the `settingsForm` field, you can also:\n\n- Directly provide JSON settings without using a form\n- Store configuration values used by hooks and templates\n- Create resource-specific constants and parameters\n- Maintain lookup tables or mapping structures\n- Define conditional logic parameters\n\n**Accessibility**\n\nSettings are available in:\n- All handlebars fields for building dynamic payloads\n- Field mapping expressions\n- JavaScript hooks via the options object\n- Filters and transformations\n\n**Best practices**\n\nFor non-technical users, create a custom form instead of editing the JSON directly.\nThis provides a user-friendly interface for updating settings without requiring JSON knowledge.\n","additionalProperties":true},"ResourceResponse":{"type":"object","description":"Core response fields shared by all Celigo resources","properties":{"_id":{"type":"string","format":"objectId","readOnly":true,"description":"Unique identifier for the resource.\n\nThe _id is used in:\n- API endpoints that operate on a specific resource (e.g., GET, PUT, DELETE)\n- References from other resources (e.g., flows that use this resource)\n- Job history and error tracking\n\nFormat: 24-character hexadecimal string\n"},"createdAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was initially created.\n\nThis read-only field is automatically set during resource creation and cannot\nbe modified. It provides an audit trail for when the resource was first added\nto the system, which can be useful for:\n\n- Resource lifecycle management\n- Audit and compliance reporting\n- Troubleshooting integration timelines\n- Identifying older resources that may need review\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"},"lastModified":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was most recently updated.\n\nThis read-only field is automatically updated whenever any property of the\nresource is modified. It provides an audit trail that can be used for:\n\n- Determining if a resource has changed since it was last reviewed\n- Monitoring configuration changes during troubleshooting\n- Implementing cache invalidation strategies\n- Synchronizing related resources based on modification time\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix)\nand will always be equal to or later than the createdAt timestamp.\n"},"deletedAt":{"type":["string","null"],"format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was marked for deletion.\n\nWhen this field is present and contains a valid timestamp, it indicates\nthat the resource has been soft-deleted (moved to the recycle bin) but not\nyet permanently removed from the system. 
This allows for recovery of\naccidentally deleted resources within a specified retention period.\n\nThe deletedAt timestamp enables:\n- Filtering deleted resources from active resource listings\n- Implementing time-based retention policies for permanent deletion\n- Tracking deletion events for audit and compliance purposes\n- Resource recovery workflows with clear timeframes\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\nWhen null or absent, the resource is considered active.\n"}},"required":["_id"]},"IAResourceResponse":{"type":"object","description":"Integration app response fields for resources that are part of integration apps","properties":{"_integrationId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the specific integration instance that contains this resource.\n\nThis field is only populated for resources that are part of an integration app\ninstallation. It contains the unique identifier (_id) of the integration\nresource that was installed in the account.\n\nThe integration instance represents a specific installed instance of an\nintegration app, with its own configuration, settings, and runtime environment.\n\nThis reference enables:\n- Tracing the resource back to its parent integration instance\n- Permission and access control based on integration ownership\n- Lifecycle management (enabling/disabling, updating, or uninstalling)\n"},"_connectorId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the integration app that defines this resource.\n\nThis field is only populated for resources that are part of an integration app.\nIt contains the unique identifier (_id) of the integration app (connector)\nthat defines the structure, behavior, and templates for this resource.\n\nThe integration app is the published template that can be installed\nmultiple times across different accounts, with each installation creating\na separate integration instance (referenced by _integrationId).\n\nThis reference enables:\n- Identifying the source integration app for this resource\n- Determining which template version is being used\n- Linking to documentation, support, and marketplace information\n"}}},"AIDescription":{"type":"object","description":"AI-generated descriptions and documentation for the resource.\n\nThis object contains automatically generated content that helps users\nunderstand the purpose, behavior, and configuration of the resource without\nrequiring them to analyze the technical details. The AI-generated content\nis sanitized and safe for display in the UI.\n","properties":{"summary":{"type":"string","description":"Brief AI-generated summary of the resource's purpose and functionality.\n\nThis concise description provides a quick overview of what the resource does,\nwhat systems it interacts with, and its primary role in the integration.\nThe summary is suitable for display in list views, dashboards, and other\ncontexts where space is limited.\n\nMaximum length: 10KB\n"},"detailed":{"type":"string","description":"Comprehensive AI-generated description of the resource's functionality.\n\nThis detailed explanation covers the resource's purpose, configuration details,\ndata flow patterns, filtering logic, and other technical aspects. 
It provides\nin-depth information suitable for documentation, tooltips, or detailed views\nin the administration interface.\n\nThe content may include HTML formatting for improved readability.\n\nMaximum length: 10KB\n"},"generatedOn":{"type":"string","format":"date-time","description":"Timestamp indicating when the AI description was generated.\n\nThis field helps track the freshness of the AI-generated content and\ndetermine when it might need to be regenerated due to changes in the\nresource's configuration or behavior.\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"}}},"APIM":{"type":"array","description":"Read-only field that stores information about the integration resources\npublished in the API Management (APIM) platform.\n\nThis field tracks the relationship between integrator.io resources and their\npublished counterparts in the Gravitee API Management platform, which is\ntightly integrated with the Celigo UI. When resources are \"pushed\" to Gravitee,\nthis field is populated with the relevant identifiers and statuses.\n","items":{"type":"object","properties":{"apiId":{"type":"string","description":"Identifier for the API where this integrator.io resource is published in the APIM.\n\nThis is a Gravitee resource identifier (not prefixed with underscore like Celigo IDs)\nthat uniquely identifies the API in the API Management platform.\n"},"flowId":{"type":"string","description":"Identifier for the flow within the API where this integrator.io resource is linked.\n\nWhen an API has multiple integrator.io resources linked, each resource is associated\nwith a specific flow in the API, identified by this field. This is a Gravitee\nresource identifier.\n"},"status":{"type":"string","description":"Indicates the publishing stage of the integrator.io resource in APIM.\n\nPossible values:\n- 'oaspending': The resource is published but the OpenAPI Specification (OAS) is not\n  yet published. The apiId will be updated with the API ID created in APIM.\n- 'published': The OpenAPI Specification for the integrator.io resource has been\n  successfully uploaded to APIM.\n","enum":["oaspending","published"]}}}}},"responses":{"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. 
Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}}}},"paths":{"/v1/imports":{"get":{"summary":"List imports","description":"Returns a list of all imports configured in the account.\nIf no imports exist in the account, a 204 response with no body will be returned.\n","operationId":"listImports","tags":["Imports"],"parameters":[{"$ref":"#/components/parameters/Include"},{"$ref":"#/components/parameters/Exclude"}],"responses":{"200":{"description":"Successfully retrieved list of imports","headers":{"Link":{"description":"RFC-5988 pagination links. When more pages remain, includes a `<...>; rel=\"next\"` entry;\nabsent on the final page.\n","schema":{"type":"string"}}},"content":{"application/json":{"schema":{"type":"array","items":{"$ref":"#/components/schemas/Response"}}}}},"204":{"description":"No imports exist in the account"},"401":{"$ref":"#/components/responses/401-unauthorized"}}}}}}
````
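
The date-handling and lookup fields defined in the mapping schema above are easiest to see together. The sketch below is illustrative only: the field names, paths, formats, and IDs are hypothetical placeholders, but each property follows the rules documented above. The first entry is a string-typed date field driven by the `extractDateFormat` / `generateDateFormat` / timezone fields, and the second is translated through a static lookup with a `default` and `allowFailures` fallback. The entries are shown as members of a `mappings` array next to a resource-level `lookups` array; the exact placement inside the import body follows the schema in the spec above.

```json
{
  "mappings": [
    {
      "generate": "orderDate",
      "dataType": "string",
      "sourceDataType": "string",
      "extract": "$.order.created_at",
      "extractDateFormat": "MM/DD/YYYY",
      "extractDateTimezone": "America/New_York",
      "generateDateFormat": "YYYY-MM-DD",
      "generateDateTimezone": "Etc/UTC",
      "conditional": { "when": "extract_not_empty" },
      "status": "Active"
    },
    {
      "generate": "countryName",
      "dataType": "string",
      "extract": "$.shipping.countryCode",
      "lookupName": "countryCodeToName",
      "default": "Unknown Country",
      "status": "Active"
    }
  ],
  "lookups": [
    {
      "name": "countryCodeToName",
      "map": { "US": "United States", "CA": "Canada" },
      "default": "Unknown Country",
      "allowFailures": true
    }
  ]
}
```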
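
A `settingsForm` is likewise easier to read end to end. The sketch below is hypothetical (field IDs, labels, and the `_scriptId` value are placeholders) but follows the shapes documented above: a `fieldMap` with a select field and a dependent text field using `visibleWhen`, a column `layout` with one collapsible container, and an `init` hook reference.

```json
"settingsForm": {
  "form": {
    "fieldMap": {
      "environment": {
        "id": "environment",
        "name": "environment",
        "type": "select",
        "label": "Environment",
        "required": true,
        "options": [
          {
            "items": [
              { "label": "Sandbox", "value": "sandbox" },
              { "label": "Production", "value": "production" }
            ]
          }
        ]
      },
      "batchSize": {
        "id": "batchSize",
        "name": "batchSize",
        "type": "text",
        "inputType": "number",
        "label": "Batch size",
        "helpText": "Number of records to send per request.",
        "visibleWhen": [
          { "field": "environment", "is": ["production"] }
        ]
      }
    },
    "layout": {
      "type": "column",
      "containers": [
        {
          "type": "collapse",
          "label": "General settings",
          "fields": ["environment", "batchSize"]
        }
      ]
    }
  },
  "init": {
    "function": "formInit",
    "_scriptId": "62a0c4e6f321d800129a1a3d"
  }
}
```

Submitted values land in the resource's `settings` object, where hooks, mappings, filters, and handlebars templates can read them at runtime.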
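
Finally, note the non-standard 401 shape called out in the responses section: authentication failures return a bare `message` object rather than the usual `errors` envelope, so clients should branch on the HTTP status code instead of destructuring an `errors` array. The two documented message values look like this:

```json
// No Authorization header sent
{ "message": "Unauthorized" }

// Header present, but the token is invalid, revoked, or expired
{ "message": "Bearer Authentication Failed" }
```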

## Create an import

> Creates a new import configuration that can be used to send data to applications\
> or external destinations.<br>
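
A minimal request body is sketched below. The `_connectionId` value and names are placeholders; an adaptor-specific configuration object matching the chosen `adaptorType` (for an `HTTPImport`, the `http` object) is also expected, as described in the schema that follows.

```json
{
  "name": "Acme Orders - HTTP Import",
  "description": "Sends order records to the Acme REST API.",
  "_connectionId": "5f1a2b3c4d5e6f7a8b9c0d1e",
  "adaptorType": "HTTPImport"
}
```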

````json
{"openapi":"3.1.0","info":{"title":"Imports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"Request":{"type":"object","description":"Configuration for import","properties":{"_connectionId":{"type":"string","format":"objectId","description":"A unique identifier representing the specific connection instance within the system. This ID is used to track, manage, and reference the connection throughout its lifecycle, ensuring accurate routing and association of messages or actions to the correct connection context.\n\n**Field behavior**\n- Uniquely identifies a single connection session.\n- Remains consistent for the duration of the connection.\n- Used to correlate requests, responses, and events related to the connection.\n- Typically generated by the system upon connection establishment.\n\n**Implementation guidance**\n- Ensure the connection ID is globally unique within the scope of the application.\n- Use a format that supports easy storage and retrieval, such as UUID or a similar unique string.\n- Avoid exposing sensitive information within the connection ID.\n- Validate the connection ID format on input to prevent injection or misuse.\n\n**Examples**\n- \"123e4567-e89b-12d3-a456-426614174000\"\n- \"conn-9876543210\"\n- \"a1b2c3d4e5f6g7h8i9j0\"\n\n**Important notes**\n- The connection ID should not be reused for different connections.\n- It is critical for maintaining session integrity and security.\n- Should be treated as an opaque value by clients; its internal structure should not be assumed.\n\n**Dependency chain**\n- Generated during connection initialization.\n- Used by session management and message routing components.\n- Referenced in logs, audits, and debugging tools.\n\n**Technical details**\n- Typically implemented as a string data type.\n- May be generated using UUID libraries or custom algorithms.\n- Stored in connection metadata and passed in API headers or payloads as needed."},"_integrationId":{"type":"string","format":"objectId","description":"The unique identifier for the integration instance associated with the current operation or resource. 
This ID is used to reference and manage the specific integration within the system, ensuring that actions and data are correctly linked to the appropriate integration context.\n\n**Field behavior**\n- Uniquely identifies an integration instance.\n- Used to associate requests or resources with a specific integration.\n- Typically immutable once set to maintain consistent linkage.\n- May be required for operations involving integration-specific data or actions.\n\n**Implementation guidance**\n- Ensure the ID is generated in a globally unique manner to avoid collisions.\n- Validate the format and existence of the integration ID before processing requests.\n- Use this ID to fetch or update integration-related configurations or data.\n- Secure the ID to prevent unauthorized access or manipulation.\n\n**Examples**\n- \"intg_1234567890abcdef\"\n- \"integration-987654321\"\n- \"a1b2c3d4e5f6g7h8i9j0\"\n\n**Important notes**\n- This ID should not be confused with user IDs or other resource identifiers.\n- It is critical for maintaining the integrity of integration-related workflows.\n- Changes to this ID after creation can lead to inconsistencies or errors.\n\n**Dependency chain**\n- Dependent on the integration creation or registration process.\n- Used by downstream services or components that interact with the integration.\n- May be referenced in logs, audit trails, and monitoring systems.\n\n**Technical details**\n- Typically represented as a string.\n- May follow a specific pattern or prefix to denote integration IDs.\n- Stored in databases or configuration files linked to integration metadata.\n- Used as a key in API calls, database queries, and internal processing logic."},"_connectorId":{"type":"string","format":"objectId","description":"The unique identifier for the connector associated with the resource or operation. 
This ID is used to reference and manage the specific connector within the system, enabling integration and communication between different components or services.\n\n**Field behavior**\n- Uniquely identifies a connector instance.\n- Used to link resources or operations to a specific connector.\n- Typically immutable once assigned.\n- Required for operations that involve connector-specific actions.\n\n**Implementation guidance**\n- Ensure the connector ID is globally unique within the system.\n- Validate the format and existence of the connector ID before processing.\n- Use consistent naming or ID conventions to facilitate management.\n- Secure the connector ID to prevent unauthorized access or manipulation.\n\n**Examples**\n- \"conn-12345\"\n- \"connector_abcde\"\n- \"uuid-550e8400-e29b-41d4-a716-446655440000\"\n\n**Important notes**\n- The connector ID must correspond to a valid and active connector.\n- Changes to the connector ID may affect linked resources or workflows.\n- Connector IDs are often generated by the system and should not be manually altered.\n\n**Dependency chain**\n- Dependent on the existence of a connector registry or management system.\n- Used by components that require connector-specific configuration or data.\n- May be referenced by authentication, routing, or integration modules.\n\n**Technical details**\n- Typically represented as a string.\n- May follow UUID, GUID, or custom ID formats.\n- Stored and transmitted as part of API requests and responses.\n- Should be indexed in databases for efficient lookup."},"adaptorType":{"type":"string","description":"Specifies the underlying technology adapter that processes this import's operations.\n\nThis field determines:\n- Which connection types are compatible with this import\n- Which API endpoints and protocols will be used\n- Which import-specific configuration objects must be provided\n- The available features and capabilities of the import\n\nThe value must match an available adapter in the system and should correspond to the\nexternal system being accessed. For example:\n- \"HTTPImport\" for generic REST/SOAP APIs\n- \"SalesforceImport\" for Salesforce-specific operations\n- \"NetSuiteDistributedImport\" for NetSuite-specific operations\n- \"FTPImport\" for file transfers via FTP/SFTP\n\nWhen creating an import, this field must be set correctly and cannot be changed afterward\nwithout creating a new import resource.\n\n**Important notes**\n- When using a specific adapter type (e.g., \"SalesforceImport\"), you must also\nprovide the corresponding configuration object (e.g., \"salesforce\").\n- HTTPImport is the default adapter type for all REST/SOAP APIs and should be used unless there is a specialized adapter type for the specific API (such as SalesforceImport for Salesforce and NetSuiteDistributedImport for NetSuite).\n- WrapperImport is used for custom adapters built outside of Celigo, and is very rarely used.\n- RDBMSImport is used for database imports that are built into Celigo such as SQL Server, MySQL, PostgreSQL, Snowflake, Oracle, MariaDB, etc. When using RDBMSImport, populate the \"rdbms\" configuration object.\n- JDBCImport is ONLY for generic JDBC connections that are not one of the built-in RDBMS types. Do NOT use JDBCImport for Snowflake, SQL Server, MySQL, PostgreSQL, or Oracle — use RDBMSImport instead. When using JDBCImport, populate the \"jdbc\" configuration object.\n- NetSuiteDistributedImport is the default for all NetSuite imports. 
Always use NetSuiteDistributedImport when importing into NetSuite.\n- NetSuiteImport is a legacy adaptor for NetSuite accounts without the Celigo SuiteApp 2.0. Do NOT use NetSuiteImport unless explicitly requested.\n\nThe connection type must be compatible with the adaptorType specified for this import.\nFor example, if adaptorType is \"HTTPImport\", _connectionId must reference a connection with type \"http\".\n","enum":["HTTPImport","FTPImport","AS2Import","S3Import","NetSuiteImport","NetSuiteDistributedImport","SalesforceImport","JDBCImport","RDBMSImport","MongodbImport","DynamodbImport","WrapperImport","AiAgentImport","GuardrailImport","FileSystemImport","ToolImport"]},"externalId":{"type":"string","description":"A unique identifier assigned to an entity by an external system or source, used to reference or correlate the entity across different platforms or services. This ID facilitates integration, synchronization, and data exchange between systems by providing a consistent reference point.\n\n**Field behavior**\n- Must be unique within the context of the external system.\n- Typically immutable once assigned to ensure consistent referencing.\n- Used to link or map records between internal and external systems.\n- May be optional or required depending on integration needs.\n\n**Implementation guidance**\n- Ensure the externalId format aligns with the external system’s specifications.\n- Validate uniqueness to prevent conflicts or data mismatches.\n- Store and transmit the externalId securely to maintain data integrity.\n- Consider indexing this field for efficient lookup and synchronization.\n\n**Examples**\n- \"12345-abcde-67890\"\n- \"EXT-2023-0001\"\n- \"user_987654321\"\n- \"SKU-XYZ-1001\"\n\n**Important notes**\n- The externalId is distinct from internal system identifiers.\n- Changes to the externalId can disrupt data synchronization.\n- Should be treated as a string to accommodate various formats.\n- May include alphanumeric characters, dashes, or underscores.\n\n**Dependency chain**\n- Often linked to integration or synchronization modules.\n- May depend on external system availability and consistency.\n- Used in API calls that involve cross-system data referencing.\n\n**Technical details**\n- Data type: string.\n- Maximum length may vary based on external system constraints.\n- Should support UTF-8 encoding to handle diverse character sets.\n- May require validation against a specific pattern or format."},"as2":{"$ref":"#/components/schemas/As2"},"dynamodb":{"$ref":"#/components/schemas/Dynamodb"},"http":{"$ref":"#/components/schemas/Http"},"ftp":{"$ref":"#/components/schemas/Ftp"},"jdbc":{"$ref":"#/components/schemas/Jdbc"},"mongodb":{"$ref":"#/components/schemas/Mongodb"},"netsuite":{"$ref":"#/components/schemas/NetSuite"},"netsuite_da":{"$ref":"#/components/schemas/NetsuiteDistributed"},"rdbms":{"$ref":"#/components/schemas/Rdbms"},"s3":{"$ref":"#/components/schemas/S3"},"wrapper":{"$ref":"#/components/schemas/Wrapper"},"salesforce":{"$ref":"#/components/schemas/Salesforce"},"file":{"$ref":"#/components/schemas/File"},"filesystem":{"$ref":"#/components/schemas/FileSystem"},"aiAgent":{"$ref":"#/components/schemas/AiAgent"},"guardrail":{"$ref":"#/components/schemas/Guardrail"},"name":{"type":"string","description":"The name property represents the identifier or title assigned to an entity, object, or resource within the system. It is typically a human-readable string that uniquely distinguishes the item from others in the same context. 
This property is essential for referencing, searching, and displaying the entity in user interfaces and API responses.\n\n**Field behavior**\n- Must be a non-empty string.\n- Should be unique within its scope to avoid ambiguity.\n- Often used as a primary label in UI components and logs.\n- May have length constraints depending on system requirements.\n- Typically case-sensitive or case-insensitive based on implementation.\n\n**Implementation guidance**\n- Validate the input to ensure it meets length and character requirements.\n- Enforce uniqueness if required by the application context.\n- Sanitize input to prevent injection attacks or invalid characters.\n- Support internationalization by allowing Unicode characters if applicable.\n- Provide clear error messages when validation fails.\n\n**Examples**\n- \"John Doe\"\n- \"Invoice_2024_001\"\n- \"MainServer\"\n- \"Project Phoenix\"\n- \"user@example.com\"\n\n**Important notes**\n- Avoid using reserved keywords or special characters that may interfere with system processing.\n- Consider trimming whitespace from the beginning and end of the string.\n- The name should be meaningful and descriptive to improve usability.\n- Changes to the name might affect references or links in other parts of the system.\n\n**Dependency chain**\n- May be referenced by other properties such as IDs, URLs, or display labels.\n- Could be linked to permissions or access control mechanisms.\n- Often used in search and filter operations within the API.\n\n**Technical details**\n- Data type: string.\n- Encoding: UTF-8 recommended.\n- Maximum length: typically defined by system constraints (e.g., 255 characters).\n- Validation rules: regex patterns may be applied to restrict allowed characters.\n- Storage considerations: indexed for fast lookup if used frequently in queries."},"description":{"type":"string","description":"A detailed textual explanation that describes the purpose, characteristics, or context of the associated entity or item. 
This field is intended to provide users with clear and comprehensive information to better understand the subject it represents.\n**Field behavior**\n- Accepts plain text or formatted text depending on implementation.\n- Should be concise yet informative, avoiding overly technical jargon unless necessary.\n- May support multi-line input to allow detailed explanations.\n- Typically optional but recommended for clarity.\n\n**Implementation guidance**\n- Ensure the description is user-friendly and accessible to the target audience.\n- Avoid including sensitive or confidential information.\n- Use consistent terminology aligned with the overall documentation or product language.\n- Consider supporting markdown or rich text formatting if applicable.\n\n**Examples**\n- \"This API endpoint retrieves user profile information including name, email, and preferences.\"\n- \"A unique identifier assigned to each transaction for tracking purposes.\"\n- \"Specifies the date and time when the event occurred, formatted in ISO 8601.\"\n\n**Important notes**\n- Keep descriptions up to date with any changes in functionality or behavior.\n- Avoid redundancy with other fields; focus on clarifying the entity’s role or usage.\n- Descriptions should not contain executable code or commands.\n\n**Dependency chain**\n- Often linked to the entity or property it describes.\n- May influence user understanding and correct usage of the associated field.\n- Can be referenced in user guides, API documentation, or UI tooltips.\n\n**Technical details**\n- Typically stored as a string data type.\n- May have length constraints depending on system limitations.\n- Encoding should support UTF-8 to accommodate international characters.\n- Should be sanitized to prevent injection attacks if rendered in UI contexts.","maxLength":10240},"unencrypted":{"type":"object","description":"Indicates whether the data or content is stored or transmitted without encryption, meaning it is in plain text and not protected by any cryptographic methods. 
This flag helps determine if additional security measures are necessary to safeguard sensitive information.\n\n**Field behavior**\n- Represents a boolean state where `true` means data is unencrypted and `false` means data is encrypted.\n- Used to identify if the data requires encryption before storage or transmission.\n- May influence processing logic related to data security and compliance.\n\n**Implementation guidance**\n- Ensure this field is explicitly set to reflect the actual encryption status of the data.\n- Use this flag to trigger encryption routines or security checks when data is unencrypted.\n- Validate the field to prevent false positives that could lead to security vulnerabilities.\n\n**Examples**\n- `true` indicating a password stored in plain text (not recommended).\n- `false` indicating a file that has been encrypted before upload.\n- `true` for a message sent over an unsecured channel.\n\n**Important notes**\n- Storing or transmitting data with `unencrypted` set to `true` can expose sensitive information to unauthorized access.\n- This field does not perform encryption itself; it only signals the encryption status.\n- Proper handling of unencrypted data is critical for compliance with data protection regulations.\n\n**Dependency chain**\n- Often used in conjunction with fields specifying encryption algorithms or keys.\n- May affect downstream processes such as logging, auditing, or alerting mechanisms.\n- Related to security policies and access control configurations.\n\n**Technical details**\n- Typically represented as a boolean value.\n- Should be checked before performing operations that assume data confidentiality.\n- May be integrated with security frameworks to enforce encryption standards."},"sampleData":{"type":"object","description":"An array or collection of sample data entries used to demonstrate or test the functionality of the API or application. This data serves as example input or output to help users understand the expected format, structure, and content. 
It can include various data types depending on the context, such as strings, numbers, objects, or nested arrays.\n\n**Field behavior**\n- Contains example data that mimics real-world usage scenarios.\n- Can be used for testing, validation, or demonstration purposes.\n- May be optional or required depending on the API specification.\n- Should be representative of typical data to ensure meaningful examples.\n\n**Implementation guidance**\n- Populate with realistic and relevant sample entries.\n- Ensure data types and structures align with the API schema.\n- Avoid sensitive or personally identifiable information.\n- Update regularly to reflect any changes in the API data model.\n\n**Examples**\n- A list of user profiles with names, emails, and roles.\n- Sample product entries with IDs, descriptions, and prices.\n- Example sensor readings with timestamps and values.\n- Mock transaction records with amounts and statuses.\n\n**Important notes**\n- Sample data is for illustrative purposes and may not reflect actual production data.\n- Should not be used for live processing or decision-making.\n- Ensure compliance with data privacy and security standards when creating samples.\n\n**Dependency chain**\n- Dependent on the data model definitions within the API.\n- May influence or be influenced by validation rules and schema constraints.\n- Often linked to documentation and testing frameworks.\n\n**Technical details**\n- Typically formatted as JSON arrays or objects.\n- Can include nested structures to represent complex data.\n- Size and complexity should be balanced to optimize performance and clarity.\n- May be embedded directly in API responses or provided as separate resources."},"distributed":{"type":"boolean","description":"Indicates whether the import is using a distributed adaptor, such as a NetSuite Distributed Import.\n**Field behavior**\n- Accepts boolean values: true or false.\n- When true, the import is using a distributed adaptor such as a NetSuite Distributed Import.\n- When false, the import is not using a distributed adaptor.\n\n**Examples**\n- true: When the adaptorType is NetSuiteDistributedImport\n- false: In all other cases"},"maxAttempts":{"type":"number","description":"Specifies the maximum number of attempts allowed for a particular operation or action before it is considered failed or terminated. This property helps control retry logic and prevents infinite loops or excessive retries in processes such as authentication, data submission, or task execution.  \n**Field behavior:**  \n- Defines the upper limit on retry attempts for an operation.  \n- Once the maximum attempts are reached, no further retries are made.  \n- Can be used to trigger fallback mechanisms or error handling after exceeding attempts.  \n**Implementation guidance:**  \n- Set to a positive integer value representing the allowed retry count.  \n- Should be configured based on the operation's criticality and expected failure rates.  \n- Consider exponential backoff or delay strategies in conjunction with maxAttempts.  \n- Validate input to ensure it is a non-negative integer.  \n**Examples:**  \n- 3 (allowing up to three retry attempts)  \n- 5 (permitting five attempts before failure)  \n- 0 (disabling retries entirely)  \n**Important notes:**  \n- Setting this value too high may cause unnecessary resource consumption or delays.  \n- Setting it too low may result in premature failure without sufficient retry opportunities.  \n- Should be used in combination with timeout settings for robust error handling.  
\n**Dependency chain:**  \n- Often used alongside properties like retryDelay, timeout, or backoffStrategy.  \n- May influence or be influenced by error handling or circuit breaker configurations.  \n**Technical details:**  \n- Typically represented as an integer data type.  \n- Enforced by the application logic or middleware managing retries.  \n- May be configurable at runtime or deployment time depending on system design."},"ignoreExisting":{"type":"boolean","description":"**CRITICAL FIELD:** Controls whether the import should skip records that already exist in the target system rather than attempting to update them or throwing errors.\n\nWhen set to true, the import will:\n- Check if each record already exists in the target system (using a lookup configured in the import, or if the record coming into the import has a value populated in the field configured in the ignoreExtract property)\n- Skip processing for records that are found to already exist\n- Continue processing only records that don't exist (new records)\n- This is for CREATE operations where you want to avoid duplicates\n\nWhen set to false or undefined (default):\n- All records are processed normally in the creation process (if applicable)\n- Existing records may be duplicated or may cause errors depending on the API\n- No special existence checking is performed\n\n**When to set this to true (MUST detect these PATTERNS)**\n\n**ALWAYS set ignoreExisting to true** when the user's prompt includes ANY of these patterns:\n\n**Primary Patterns (very common)**\n- \"ignore existing\" / \"ignoring existing\" / \"ignore any existing\"\n- \"skip existing\" / \"skipping existing\" / \"skip any existing\"\n- \"while ignoring existing\" / \"while skipping existing\"\n- \"skip duplicates\" / \"avoid duplicates\" / \"ignore duplicates\"\n\n**Secondary Patterns**\n- \"only create new\" / \"only add new\" / \"create new only\"\n- \"don't update existing\" / \"avoid updating existing\" / \"without updating\"\n- \"create if doesn't exist\" / \"add if doesn't exist\"\n- \"insert only\" / \"create only\"\n- \"skip if already exists\" / \"ignore if exists\"\n- \"don't process existing\" / \"bypass existing\"\n\n**Context Clues**\n- User says \"create\" or \"insert\" AND mentions checking/matching against existing data\n- User wants to add records but NOT modify what's already there\n- User is concerned about duplicate prevention during creation\n\n**Examples**\n\n**Should set ignoreExisting: true**\n- \"Create customer records in Shopify while ignoring existing customers\" → ignoreExisting: true\n- \"Create vendor records while ignoring existing vendors\" → ignoreExisting: true\n- \"Import new products only, skip existing ones\" → ignoreExisting: true\n- \"Add orders to the system, ignore any that already exist\" → ignoreExisting: true\n- \"Create accounts but don't update existing ones\" → ignoreExisting: true\n- \"Insert new contacts, skip duplicates\" → ignoreExisting: true\n- \"Create records, skipping any that match existing data\" → ignoreExisting: true\n\n**Should not set ignoreExisting: true**\n- \"Update existing customer records\" → ignoreExisting: false (update operation)\n- \"Create or update records\" → ignoreExisting: false (upsert operation)\n- \"Sync all records\" → ignoreExisting: false (sync/upsert operation)\n\n**Important**\n- This field is typically used with INSERT operations\n- Common pattern: \"Create X while ignoring existing X\" → MUST set to true\n\n**When to leave this false/undefined**\n\nLeave this false or undefined 
when:\n- The prompt doesn't mention skipping or ignoring existing records\n- The import is only performing updates (no inserts)\n- The prompt says \"sync\", \"update\", or \"upsert\"\n\n**Important notes**\n- This flag requires either a properly configured lookup to identify existing records or an ignoreExtract field configured to identify the field that is used to determine if the record already exists.\n- The lookup typically checks a unique identifier field (id, email, external_id, etc.)\n- When true, existing records are silently skipped (not updated, not errored)\n- Use this for insert-only or create-only operations\n- Do NOT use this if the user wants to update existing records\n\n**Technical details**\n- Works in conjunction with a lookup configuration, or with a configured ignoreExtract field, to determine if the record already exists.\n- The lookup queries the target system before attempting to create the record\n- If the lookup returns a match, the record is skipped\n- If the lookup returns no match, the record is processed normally"},"ignoreMissing":{"type":"boolean","description":"Indicates whether the system should ignore missing or non-existent resources during processing. When set to true, the operation will proceed without error even if some referenced items are not found; when false, the absence of required resources will cause the operation to fail or return an error.  \n**Field behavior:**  \n- Controls error handling related to missing resources.  \n- Allows operations to continue gracefully when some data is unavailable.  \n- Affects the robustness and fault tolerance of the process.  \n**Implementation guidance:**  \n- Use true to enable lenient processing where missing data is acceptable.  \n- Use false to enforce strict validation and fail fast on missing resources.  \n- Ensure that downstream logic can handle partial results if ignoring missing items.  \n**Examples:**  \n- ignoreMissing: true — skips over missing files during batch processing.  \n- ignoreMissing: false — throws an error if a referenced database entry is not found.  \n**Important notes:**  \n- Setting this to true may lead to incomplete results if missing data is critical.  \n- Default behavior should be clearly documented to avoid unexpected failures.  \n- Consider logging or reporting missing items even when ignoring them.  \n**Dependency chain:**  \n- Often used in conjunction with resource identifiers or references.  \n- May impact validation and error reporting modules.  \n**Technical details:**  \n- Typically a boolean flag.  \n- Influences control flow and exception handling mechanisms.  \n- Should be clearly defined in API contracts to manage client expectations."},"idLockTemplate":{"type":"string","description":"The unique identifier for the lock template used to define the configuration and behavior of a lock within the system. 
This ID links the lock instance to a predefined template that specifies its properties, access rules, and operational parameters.\n\n**Field behavior**\n- Uniquely identifies a lock template within the system.\n- Used to associate a lock instance with its configuration template.\n- Typically immutable once assigned to ensure consistent behavior.\n\n**Implementation guidance**\n- Must be a valid identifier corresponding to an existing lock template.\n- Should be validated against the list of available templates before assignment.\n- Ensure uniqueness to prevent conflicts between different lock configurations.\n\n**Examples**\n- \"template-12345\"\n- \"lockTemplateA1B2C3\"\n- \"LT-987654321\"\n\n**Important notes**\n- Changing the template ID after lock creation may lead to inconsistent lock behavior.\n- The template defines critical lock parameters; ensure the correct template is referenced.\n- This field is essential for lock provisioning and management workflows.\n\n**Dependency chain**\n- Depends on the existence of predefined lock templates in the system.\n- Used by lock management and access control modules to apply configurations.\n\n**Technical details**\n- Typically represented as a string or UUID.\n- Should conform to the system’s identifier format standards.\n- Indexed for efficient lookup and retrieval in the database."},"dataURITemplate":{"type":"string","description":"A template string used to construct data URIs dynamically by embedding variable placeholders that are replaced with actual values at runtime. This template facilitates the generation of standardized data URIs for accessing or referencing resources within the system.\n\n**Field behavior**\n- Accepts a string containing placeholders for variables.\n- Placeholders are replaced with corresponding runtime values to form a complete data URI.\n- Enables dynamic and flexible URI generation based on context or input parameters.\n- Supports standard URI encoding where necessary to ensure valid URI formation.\n\n**Implementation guidance**\n- Define clear syntax for variable placeholders within the template (e.g., using curly braces or another delimiter).\n- Ensure that all required variables are provided at runtime to avoid incomplete URIs.\n- Validate the final constructed URI to confirm it adheres to URI standards.\n- Consider supporting optional variables with default values to increase template flexibility.\n\n**Examples**\n- \"data:text/plain;base64,{base64EncodedData}\"\n- \"data:{mimeType};charset={charset},{data}\"\n- \"data:image/{imageFormat};base64,{encodedImageData}\"\n\n**Important notes**\n- The template must produce a valid data URI after variable substitution.\n- Proper encoding of variable values is critical to prevent malformed URIs.\n- This property is essential for scenarios requiring inline resource embedding or data transmission via URIs.\n- Misconfiguration or missing variables can lead to invalid or unusable URIs.\n\n**Dependency chain**\n- Relies on the availability of runtime variables corresponding to placeholders.\n- May depend on encoding utilities to prepare variable values.\n- Often used in conjunction with data processing or resource generation components.\n\n**Technical details**\n- Typically a UTF-8 encoded string.\n- Supports standard URI schemes and encoding rules.\n- Variable placeholders should be clearly defined and consistently parsed.\n- May integrate with templating engines or string interpolation 
mechanisms."},"oneToMany":{"$ref":"#/components/schemas/OneToMany"},"pathToMany":{"$ref":"#/components/schemas/PathToMany"},"blobKeyPath":{"type":"string","description":"Specifies the file system path to the blob key, which uniquely identifies the binary large object (blob) within the storage system. This path is used to locate and access the blob data efficiently during processing or retrieval operations.\n**Field behavior**\n- Represents a string path pointing to the location of the blob key.\n- Used as a reference to access the associated blob data.\n- Typically immutable once set to ensure consistent access.\n- May be required for operations involving blob retrieval or manipulation.\n**Implementation guidance**\n- Ensure the path is valid and correctly formatted according to the storage system's conventions.\n- Validate the existence of the blob key at the specified path before processing.\n- Handle cases where the path may be null or empty gracefully.\n- Secure the path to prevent unauthorized access to blob data.\n**Examples**\n- \"/data/blobs/abc123def456\"\n- \"blobstore/keys/2024/06/blobkey789\"\n- \"s3://bucket-name/blob-keys/key123\"\n**Important notes**\n- The blobKeyPath must correspond to an existing blob key in the storage backend.\n- Incorrect or malformed paths can lead to failures in blob retrieval.\n- Access permissions to the path must be properly configured.\n- The format of the path may vary depending on the underlying storage technology.\n**Dependency chain**\n- Dependent on the storage system's directory or key structure.\n- Used by blob retrieval and processing components.\n- May be linked to metadata describing the blob.\n**Technical details**\n- Typically represented as a UTF-8 encoded string.\n- May include hierarchical directory structures or URI schemes.\n- Should be sanitized to prevent injection or path traversal vulnerabilities.\n- Often used as a lookup key in blob storage APIs or databases."},"blob":{"type":"boolean","description":"The binary large object (blob) representing the raw data content to be processed, stored, or transmitted. 
This field typically contains encoded or serialized data such as images, files, or other multimedia content in a compact binary format.\n\n**Field behavior**\n- Holds the actual binary data payload.\n- Can represent various data types including images, documents, or other file formats.\n- May be encoded in formats like Base64 for safe transmission over text-based protocols.\n- Treated as opaque data by the API, with no interpretation unless specified.\n\n**Implementation guidance**\n- Ensure the blob data is properly encoded (e.g., Base64) if the transport medium requires text-safe encoding.\n- Validate the size and format of the blob to meet API constraints.\n- Handle decoding and encoding consistently on both client and server sides.\n- Use streaming or chunking for very large blobs to optimize performance.\n\n**Examples**\n- A Base64-encoded JPEG image file.\n- A serialized JSON object converted into a binary format.\n- A PDF document encoded as a binary blob.\n- An audio file represented as a binary stream.\n\n**Important notes**\n- The blob content is typically opaque and should not be altered during transmission.\n- Size limits may apply depending on API or transport constraints.\n- Proper encoding and decoding are critical to avoid data corruption.\n- Security considerations should be taken into account when handling binary data.\n\n**Dependency chain**\n- May depend on encoding schemes (e.g., Base64) for safe transmission.\n- Often used in conjunction with metadata fields describing the blob type or format.\n- Requires appropriate content-type headers or descriptors for correct interpretation.\n\n**Technical details**\n- Usually represented as a byte array or Base64-encoded string in JSON APIs.\n- May require MIME type specification to indicate the nature of the data.\n- Handling may involve buffer management and memory considerations.\n- Supports binary-safe transport mechanisms to preserve data integrity."},"assistant":{"type":"string","description":"Specifies the configuration and behavior settings for the AI assistant that will interact with the user. 
This property defines how the assistant responds, including its personality, knowledge scope, response style, and any special instructions or constraints that guide its operation.\n\n**Field behavior**\n- Determines the assistant's tone, style, and manner of communication.\n- Controls the knowledge base or data sources the assistant can access.\n- Enables customization of the assistant's capabilities and limitations.\n- May include parameters for language, verbosity, and response format.\n\n**Implementation guidance**\n- Ensure the assistant configuration aligns with the intended user experience.\n- Validate that all required sub-properties within the assistant configuration are correctly set.\n- Support dynamic updates to the assistant settings to adapt to different contexts or user needs.\n- Provide defaults for unspecified settings to maintain consistent behavior.\n\n**Examples**\n- Setting the assistant to a formal tone with technical expertise.\n- Configuring the assistant to provide concise answers with references.\n- Defining the assistant to operate within a specific domain, such as healthcare or finance.\n- Enabling multi-language support for the assistant responses.\n\n**Important notes**\n- Changes to the assistant property can significantly affect user interaction quality.\n- Properly securing and validating assistant configurations is critical to prevent misuse.\n- The assistant's behavior should comply with ethical guidelines and privacy regulations.\n- Overly restrictive settings may limit the assistant's usefulness, while too broad settings may reduce relevance.\n\n**Dependency chain**\n- May depend on user preferences or session context.\n- Interacts with the underlying AI model and its capabilities.\n- Influences downstream processing of user inputs and outputs.\n- Can be linked to external knowledge bases or APIs for enhanced responses.\n\n**Technical details**\n- Typically structured as an object containing multiple nested properties.\n- May include fields such as personality traits, knowledge cutoff dates, and response constraints.\n- Supports serialization and deserialization for API communication.\n- Requires compatibility with the AI platform's configuration schema."},"deleteAfterImport":{"type":"boolean","description":"Indicates whether the source file should be deleted automatically after the import process completes successfully. 
This boolean flag helps manage storage by removing files that are no longer needed once their data has been ingested.\n\n**Field behavior**\n- When set to true, the system deletes the source file immediately after a successful import.\n- When set to false or omitted, the source file remains intact after import.\n- Deletion only occurs if the import process completes without errors.\n\n**Implementation guidance**\n- Validate that the import was successful before deleting the file.\n- Ensure proper permissions are in place to delete the file.\n- Consider logging deletion actions for audit purposes.\n- Provide clear user documentation about the implications of enabling this flag.\n\n**Examples**\n- true: The file \"data.csv\" will be removed after import.\n- false: The file \"data.csv\" will remain after import for manual review or backup.\n\n**Important notes**\n- Enabling this flag may result in data loss if the file is needed for re-import or troubleshooting.\n- Use with caution in environments where file retention policies are strict.\n- This flag does not affect files that fail to import.\n\n**Dependency chain**\n- Depends on the successful completion of the import operation.\n- May interact with file system permissions and cleanup routines.\n\n**Technical details**\n- Typically implemented as a boolean data type.\n- Triggers a file deletion API or system call post-import.\n- Should handle exceptions gracefully to avoid orphaned files or unintended deletions."},"assistantMetadata":{"type":"object","description":"Metadata containing additional information about the assistant, such as configuration details, versioning, capabilities, and any custom attributes that help in managing or identifying the assistant's behavior and context. This metadata supports enhanced control, monitoring, and customization of the assistant's interactions.\n\n**Field behavior**\n- Holds supplementary data that describes or configures the assistant.\n- Can include static or dynamic information relevant to the assistant's operation.\n- Used to influence or inform the assistant's responses and functionality.\n- May be updated or extended to reflect changes in the assistant's capabilities or environment.\n\n**Implementation guidance**\n- Structure metadata in a clear, consistent format (e.g., key-value pairs).\n- Ensure sensitive information is not exposed through metadata.\n- Use metadata to enable feature toggles, version tracking, or context awareness.\n- Validate metadata content to maintain integrity and compatibility.\n\n**Examples**\n- Version number of the assistant (e.g., \"v1.2.3\").\n- Supported languages or locales.\n- Enabled features or modules.\n- Custom tags indicating the assistant's domain or purpose.\n\n**Important notes**\n- Metadata should not contain user-specific or sensitive personal data.\n- Keep metadata concise to avoid performance overhead.\n- Changes in metadata may affect assistant behavior; manage updates carefully.\n- Metadata is primarily for internal use and may not be exposed to end users.\n\n**Dependency chain**\n- May depend on assistant configuration settings.\n- Can influence or be referenced by response generation logic.\n- Interacts with monitoring and analytics systems for tracking.\n\n**Technical details**\n- Typically represented as a JSON object or similar structured data.\n- Should be easily extensible to accommodate future attributes.\n- Must be compatible with the overall assistant API schema.\n- Should support serialization and deserialization without data 
loss."},"useTechAdaptorForm":{"type":"boolean","description":"Indicates whether the technical adaptor form should be utilized in the current process or workflow. This boolean flag determines if the system should enable and display the technical adaptor form interface for user interaction or automated processing.\n\n**Field behavior**\n- When set to true, the technical adaptor form is activated and presented to the user or system.\n- When set to false, the technical adaptor form is bypassed or hidden.\n- Influences the flow of data input and validation related to technical adaptor configurations.\n\n**Implementation guidance**\n- Default to false if the technical adaptor form is optional or not always required.\n- Ensure that enabling this flag triggers all necessary UI components and backend processes related to the technical adaptor form.\n- Validate that dependent fields or modules respond appropriately when this flag changes state.\n\n**Examples**\n- true: The system displays the technical adaptor form for configuration.\n- false: The system skips the technical adaptor form and proceeds without it.\n\n**Important notes**\n- Changing this flag may affect downstream processing and data integrity.\n- Must be synchronized with user permissions and feature availability.\n- Should be clearly documented in user guides if exposed in UI settings.\n\n**Dependency chain**\n- May depend on user roles or feature toggles enabling technical adaptor functionality.\n- Could influence or be influenced by other configuration flags related to adaptor workflows.\n\n**Technical details**\n- Typically implemented as a boolean data type.\n- Used in conditional logic to control UI rendering and backend processing paths.\n- May trigger event listeners or hooks when its value changes."},"distributedAdaptorData":{"type":"object","description":"Contains configuration and operational data specific to distributed adaptors used within the system. This data includes parameters, status information, and metadata necessary for managing and monitoring distributed adaptor instances across different nodes or environments. 
It facilitates synchronization, performance tuning, and error handling for adaptors operating in a distributed architecture.\n\n**Field behavior**\n- Holds detailed information about each distributed adaptor instance.\n- Supports dynamic updates to reflect real-time adaptor status and configuration changes.\n- Enables centralized management and monitoring of distributed adaptors.\n- May include nested structures representing various adaptor attributes and metrics.\n\n**Implementation guidance**\n- Ensure data consistency and integrity when updating distributedAdaptorData.\n- Use appropriate serialization formats to handle complex nested data.\n- Implement validation to verify the correctness of adaptor configuration parameters.\n- Design for scalability to accommodate varying numbers of distributed adaptors.\n\n**Examples**\n- Configuration settings for a distributed database adaptor including connection strings and timeout values.\n- Status reports indicating the health and performance metrics of adaptors deployed across multiple servers.\n- Metadata describing adaptor version, deployment environment, and last synchronization timestamp.\n\n**Important notes**\n- This property is critical for systems relying on distributed adaptors to function correctly.\n- Changes to this data should be carefully managed to avoid disrupting adaptor operations.\n- Sensitive information within adaptor data should be secured and access-controlled.\n\n**Dependency chain**\n- Dependent on the overall system configuration and deployment topology.\n- Interacts with monitoring and management modules that consume adaptor data.\n- May influence load balancing and failover mechanisms within the distributed system.\n\n**Technical details**\n- Typically represented as a complex object or map with key-value pairs.\n- May include arrays or lists to represent multiple adaptor instances.\n- Requires efficient parsing and serialization to minimize performance overhead.\n- Should support versioning to handle changes in adaptor data schema over time."},"filter":{"allOf":[{"description":"Filter configuration for selectively processing records during import operations.\n\n**Import-specific behavior**\n\n**Pre-Import Filtering**: Filters are applied to incoming records before they are sent to the destination system. Records that don't match the filter criteria are silently dropped and not imported.\n\n**Available filter fields**\n\nThe fields available for filtering are the data fields from each record being imported.\n"},{"$ref":"#/components/schemas/Filter"}]},"traceKeyTemplate":{"type":"string","description":"A template string used to generate unique trace keys for tracking and correlating requests across distributed systems. 
This template supports placeholders that can be dynamically replaced with runtime values such as timestamps, unique identifiers, or contextual metadata to create meaningful and consistent trace keys.\n\n**Field behavior**\n- Defines the format and structure of trace keys used in logging and monitoring.\n- Supports dynamic insertion of variables or placeholders to customize trace keys per request.\n- Ensures trace keys are unique and consistent for effective traceability.\n- Can be used to correlate logs, metrics, and traces across different services.\n\n**Implementation guidance**\n- Use clear and descriptive placeholders that map to relevant runtime data (e.g., {timestamp}, {requestId}).\n- Ensure the template produces trace keys that are unique enough to avoid collisions.\n- Validate the template syntax before applying it to avoid runtime errors.\n- Consider including service names or environment identifiers for multi-service deployments.\n- Keep the template concise to avoid excessively long trace keys.\n\n**Examples**\n- \"trace-{timestamp}-{requestId}\"\n- \"{serviceName}-{env}-{uniqueId}\"\n- \"traceKey-{userId}-{sessionId}-{epochMillis}\"\n\n**Important notes**\n- The template must be compatible with the tracing and logging infrastructure in use.\n- Placeholders should be well-defined and documented to ensure consistent usage.\n- Improper templates may lead to trace key collisions or loss of traceability.\n- Avoid sensitive information in trace keys to maintain security and privacy.\n\n**Dependency chain**\n- Relies on runtime context or metadata to replace placeholders.\n- Integrates with logging, monitoring, and tracing systems that consume trace keys.\n- May depend on unique identifier generators or timestamp providers.\n\n**Technical details**\n- Typically implemented as a string with placeholder syntax (e.g., curly braces).\n- Requires a parser or formatter to replace placeholders with actual values at runtime.\n- Should support extensibility to add new placeholders as needed.\n- May need to conform to character restrictions imposed by downstream systems."},"mockResponse":{"type":"object","description":"Defines a predefined response that the system will return when the associated API endpoint is invoked, allowing for simulation of real API behavior without requiring actual backend processing. 
This is particularly useful for testing, development, and demonstration purposes where consistent and controlled responses are needed.\n\n**Field behavior**\n- Returns the specified mock data instead of executing the real API logic.\n- Can simulate various response scenarios including success, error, and edge cases.\n- Overrides the default response when enabled.\n- Supports static or dynamic content depending on implementation.\n\n**Implementation guidance**\n- Ensure the mock response format matches the expected API response schema.\n- Use realistic data to closely mimic actual API behavior.\n- Update mock responses regularly to reflect changes in the real API.\n- Consider including headers, status codes, and body content in the mock response.\n- Provide clear documentation on when and how the mock response is used.\n\n**Examples**\n- Returning a fixed JSON object representing a user profile.\n- Simulating a 404 error response with an error message.\n- Providing a list of items with pagination metadata.\n- Mocking a successful transaction confirmation message.\n\n**Important notes**\n- Mock responses should not be used in production environments unless explicitly intended.\n- They are primarily for development, testing, and demonstration.\n- Overuse of mock responses can mask issues in the real API implementation.\n- Ensure that consumers of the API are aware when mock responses are active.\n\n**Dependency chain**\n- Dependent on the API endpoint configuration.\n- May interact with request parameters to determine the appropriate mock response.\n- Can be linked with feature flags or environment settings to toggle mock behavior.\n\n**Technical details**\n- Typically implemented as static JSON or XML payloads.\n- May include HTTP status codes and headers.\n- Can be integrated with API gateways or mocking tools.\n- Supports conditional logic in advanced implementations to vary responses."},"_ediProfileId":{"type":"string","format":"objectId","description":"The unique identifier for the Electronic Data Interchange (EDI) profile associated with the transaction or entity. 
This ID links the current data to a specific EDI configuration that defines the format, protocols, and trading partner details used for electronic communication.\n\n**Field behavior**\n- Uniquely identifies an EDI profile within the system.\n- Used to retrieve or reference EDI settings and parameters.\n- Must be consistent and valid to ensure proper EDI processing.\n- Typically immutable once assigned to maintain data integrity.\n\n**Implementation guidance**\n- Ensure the ID corresponds to an existing and active EDI profile in the system.\n- Validate the format and existence of the ID before processing transactions.\n- Use this ID to fetch EDI-specific rules, mappings, and partner information.\n- Secure the ID to prevent unauthorized changes or misuse.\n\n**Examples**\n- \"EDI12345\"\n- \"PROF-67890\"\n- \"X12_PROFILE_001\"\n\n**Important notes**\n- This ID is critical for routing and formatting EDI messages correctly.\n- Incorrect or missing IDs can lead to failed or misrouted EDI transmissions.\n- The ID should be managed centrally to avoid duplication or conflicts.\n\n**Dependency chain**\n- Depends on the existence of predefined EDI profiles in the system.\n- Used by EDI processing modules to apply correct communication standards.\n- May be linked to trading partner configurations and document types.\n\n**Technical details**\n- Typically a string or alphanumeric value.\n- Stored in a database with indexing for quick lookup.\n- May follow a naming convention defined by the organization or EDI standards.\n- Used as a foreign key in relational data models involving EDI transactions."},"parsers":{"type":["string","null"],"enum":["1"],"description":"A collection of parser configurations that define how input data should be interpreted and processed. Each parser specifies rules, patterns, or formats to extract meaningful information from raw data sources, enabling the system to handle diverse data types and structures effectively. 
This property allows customization and extension of parsing capabilities to accommodate various input formats.\n\n**Field behavior**\n- Accepts multiple parser definitions as an array or list.\n- Each parser operates independently to process specific data formats.\n- Parsers are applied in the order they are defined unless otherwise specified.\n- Supports enabling or disabling individual parsers dynamically.\n- Can include built-in or custom parser implementations.\n\n**Implementation guidance**\n- Ensure parsers are well-defined with clear matching criteria and extraction rules.\n- Validate parser configurations to prevent conflicts or overlaps.\n- Provide mechanisms to add, update, or remove parsers without downtime.\n- Support extensibility to integrate new parsing logic as needed.\n- Document each parser’s purpose and expected input/output formats.\n\n**Examples**\n- A JSON parser that extracts fields from JSON-formatted input.\n- A CSV parser that splits input lines into columns based on delimiters.\n- A regex-based parser that identifies patterns within unstructured text.\n- An XML parser that navigates hierarchical data structures.\n- A custom parser designed to interpret proprietary log file formats.\n\n**Important notes**\n- Incorrect parser configurations can lead to data misinterpretation or processing errors.\n- Parsers should be optimized for performance to handle large volumes of data efficiently.\n- Consider security implications when parsing untrusted input to avoid injection attacks.\n- Testing parsers with representative sample data is crucial for reliability.\n- Parsers may depend on external libraries or modules; ensure compatibility.\n\n**Dependency chain**\n- Relies on input data being available and accessible.\n- May depend on schema definitions or data contracts for accurate parsing.\n- Interacts with downstream components that consume parsed output.\n- Can be influenced by global settings such as character encoding or locale.\n- May require synchronization with data validation and transformation steps.\n\n**Technical details**\n- Typically implemented as modular components or plugins.\n- Configurations may include pattern definitions, field mappings, and error handling rules.\n- Supports various data formats including text, binary, and structured documents.\n- May expose APIs or interfaces for runtime configuration and monitoring.\n- Often integrated with logging and debugging tools to trace parsing operations."},"hooks":{"type":"object","properties":{"preMap":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the function to be executed during the pre-mapping hook phase. This function allows for custom logic to be applied before the main mapping process begins, enabling data transformation, validation, or any preparatory steps required. 
It should be defined in a way that it can be invoked with the appropriate context and input data.\n\n**Field behavior**\n- Executed before the mapping process starts.\n- Receives input data and context for processing.\n- Can modify or validate data prior to mapping.\n- Should return the processed data or relevant output for the next stage.\n\n**Implementation guidance**\n- Define the function with clear input parameters and expected output.\n- Ensure the function handles errors gracefully to avoid interrupting the mapping flow.\n- Keep the function focused on pre-mapping concerns only.\n- Test the function independently to verify its behavior before integration.\n\n**Examples**\n- A function that normalizes input data formats.\n- A function that filters out invalid entries.\n- A function that enriches data with additional attributes.\n- A function that logs input data for auditing purposes.\n\n**Important notes**\n- The function must be synchronous or properly handle asynchronous operations.\n- Avoid side effects that could impact other parts of the mapping process.\n- Ensure compatibility with the overall data pipeline and mapping framework.\n- Document the function’s purpose and usage clearly for maintainability.\n\n**Dependency chain**\n- Invoked after the preMap hook is triggered.\n- Outputs data consumed by the subsequent mapping logic.\n- May depend on external utilities or services for data processing.\n\n**Technical details**\n- Typically implemented as a JavaScript or TypeScript function.\n- Receives parameters such as input data object and context metadata.\n- Returns a transformed data object or a promise resolving to it.\n- Integrated into the mapping engine’s hook system for execution timing."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier for the script to be executed during the pre-mapping hook phase. This ID is used to reference and invoke the specific script that performs custom logic or transformations before the main mapping process begins.\n**Field behavior**\n- Specifies which script is triggered before the mapping operation.\n- Must correspond to a valid and existing script within the system.\n- Determines the custom pre-processing logic applied to data.\n**Implementation guidance**\n- Ensure the script ID is correctly registered and accessible in the environment.\n- Validate the script's compatibility with the preMap hook context.\n- Use consistent naming conventions for script IDs to avoid conflicts.\n**Examples**\n- \"script12345\"\n- \"preMapTransform_v2\"\n- \"hookScript_001\"\n**Important notes**\n- An invalid or missing script ID will result in the preMap hook being skipped or causing an error.\n- The script associated with this ID should be optimized for performance to avoid delays.\n- Permissions must be set appropriately to allow execution of the referenced script.\n**Dependency chain**\n- Depends on the existence of the script repository or script management system.\n- Linked to the preMap hook lifecycle event in the mapping process.\n- May interact with input data and influence subsequent mapping steps.\n**Technical details**\n- Typically represented as a string identifier.\n- Used internally by the system to locate and execute the script.\n- May be stored in a database or configuration file referencing available scripts."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the current operation or context. 
This ID is used to reference and manage the specific stack within the system, ensuring that all related resources and actions are correctly linked.  \n**Field behavior:**  \n- Uniquely identifies a stack instance within the environment.  \n- Used internally to track and manage stack-related operations.  \n- Typically immutable once assigned for a given stack lifecycle.  \n**Implementation guidance:**  \n- Should be generated or assigned by the system managing the stacks.  \n- Must be validated to ensure uniqueness within the scope of the application.  \n- Should be securely stored and transmitted to prevent unauthorized access or manipulation.  \n**Examples:**  \n- \"stack-12345abcde\"  \n- \"arn:aws:cloudformation:us-east-1:123456789012:stack/my-stack/50d6f0a0-1234-11e8-9a8b-0a1234567890\"  \n- \"projX-envY-stackZ\"  \n**Important notes:**  \n- This ID is critical for correlating resources and operations to the correct stack.  \n- Changing the stack ID after creation can lead to inconsistencies and errors.  \n- Should not be confused with other identifiers like resource IDs or deployment IDs.  \n**Dependency chain:**  \n- Depends on the stack creation or registration process to obtain the ID.  \n- Used by hooks, deployment scripts, and resource managers to reference the stack.  \n**Technical details:**  \n- Typically a string value, possibly following a specific format or pattern.  \n- May include alphanumeric characters, dashes, and colons depending on the system.  \n- Often used as a key in databases or APIs to retrieve stack information."},"configuration":{"type":"object","description":"Configuration settings that define the behavior and parameters for the preMap hook, allowing customization of its execution prior to the mapping process. This property enables users to specify options such as input validation rules, transformation parameters, or conditional logic that tailor the hook's operation to specific requirements.\n\n**Field behavior**\n- Accepts a structured object containing key-value pairs relevant to the preMap hook.\n- Influences how the preMap hook processes data before mapping occurs.\n- Can enable or disable specific features or validations within the hook.\n- Supports dynamic adjustment of hook behavior based on provided settings.\n\n**Implementation guidance**\n- Validate the configuration object against the expected schema to ensure correctness.\n- Provide default values for optional settings to maintain consistent behavior.\n- Document all configurable options clearly for users to understand their impact.\n- Allow extensibility to accommodate future configuration parameters without breaking compatibility.\n\n**Examples**\n- Setting a flag to enable strict input validation before mapping.\n- Specifying a list of fields to exclude from the mapping process.\n- Defining transformation rules such as trimming whitespace or converting case.\n- Providing conditional logic parameters to skip mapping under certain conditions.\n\n**Important notes**\n- Incorrect configuration values may cause the preMap hook to fail or behave unexpectedly.\n- Configuration should be tested thoroughly to ensure it aligns with the intended data processing flow.\n- Changes to configuration may affect downstream processes relying on the mapped data.\n- Sensitive information should not be included in the configuration to avoid security risks.\n\n**Dependency chain**\n- Depends on the preMap hook being enabled and invoked during the processing pipeline.\n- Influences the input data that will be passed 
to subsequent mapping stages.\n- May interact with other hook configurations or global settings affecting data transformation.\n\n**Technical details**\n- Typically represented as a JSON object or equivalent data structure.\n- Parsed and applied at runtime before the mapping logic executes.\n- Supports nested configuration properties for complex customization.\n- Should be immutable during the execution of the preMap hook to ensure consistency."}},"description":"A hook function that is executed before the mapping process begins. This function allows for custom preprocessing or transformation of data prior to the main mapping logic being applied. It is typically used to modify input data, validate conditions, or set up necessary context for the mapping operation.\n\n**Field behavior**\n- Invoked once before the mapping starts.\n- Receives the raw input data or context.\n- Can modify or replace the input data before mapping.\n- Can abort or alter the flow based on custom logic.\n\n**Implementation guidance**\n- Implement as a synchronous or asynchronous function depending on the environment.\n- Ensure it returns the modified data or context for the mapping to proceed.\n- Handle errors gracefully to avoid disrupting the mapping process.\n- Use for data normalization, enrichment, or validation before mapping.\n\n**Examples**\n- Sanitizing input strings to remove unwanted characters.\n- Adding default values to missing fields.\n- Logging or auditing input data before transformation.\n- Validating input schema and throwing errors if invalid.\n\n**Important notes**\n- This hook is optional; if not provided, mapping proceeds with original input.\n- Changes made here directly affect the mapping outcome.\n- Avoid heavy computations to prevent performance bottlenecks.\n- Should not perform side effects that depend on mapping results.\n\n**Dependency chain**\n- Executed before the main mapping function.\n- Influences the data passed to subsequent mapping hooks or steps.\n- May affect downstream validation or output generation.\n\n**Technical details**\n- Typically implemented as a function with signature (inputData, context) => modifiedData.\n- Supports both synchronous return values and Promises for async operations.\n- Integrated into the mapping pipeline as the first step.\n- Can be configured or replaced depending on mapping framework capabilities."},"postMap":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the name of the function to be executed during the post-map hook phase. This function is invoked after the mapping process completes, allowing for custom processing, validation, or transformation of the mapped data. 
It should be defined within the appropriate scope and adhere to the expected signature for post-map hook functions.\n\n**Field behavior**\n- Defines the callback function to run after the mapping operation.\n- Enables customization and extension of the mapping logic.\n- Executes only once per mapping cycle during the post-map phase.\n- Can modify or augment the mapped data before final output.\n\n**Implementation guidance**\n- Ensure the function name corresponds to a valid, accessible function in the execution context.\n- The function should handle any necessary error checking and data validation.\n- Avoid long-running or blocking operations within the function to maintain performance.\n- Document the function’s expected inputs and outputs clearly.\n\n**Examples**\n- \"validateMappedData\"\n- \"transformPostMapping\"\n- \"logMappingResults\"\n\n**Important notes**\n- The function must be synchronous or properly handle asynchronous behavior if supported.\n- If the function is not defined or invalid, the post-map hook will be skipped without error.\n- This hook is optional but recommended for complex mapping scenarios requiring additional processing.\n\n**Dependency chain**\n- Depends on the mapping process completing successfully.\n- May rely on data structures produced by the mapping phase.\n- Should be compatible with other hooks or lifecycle events in the mapping pipeline.\n\n**Technical details**\n- Expected to be a string representing the function name.\n- The function should accept parameters as defined by the hook interface.\n- Execution context must have access to the function at runtime.\n- Errors thrown within the function may affect the overall mapping operation depending on error handling policies."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier for the script to be executed in the postMap hook. This ID is used to reference and invoke the specific script after the mapping process completes, enabling custom logic or transformations to be applied.  \n**Field behavior:**  \n- Accepts a string representing the script's unique ID.  \n- Must correspond to an existing script within the system.  \n- Triggers the execution of the associated script after the mapping operation finishes.  \n**Implementation guidance:**  \n- Ensure the script ID is valid and the script is properly deployed before referencing it here.  \n- Use this field to extend or customize the behavior of the mapping process via scripting.  \n- Validate the script ID to prevent runtime errors during hook execution.  \n**Examples:**  \n- \"script_12345\"  \n- \"postMapTransformScript\"  \n- \"customHookScript_v2\"  \n**Important notes:**  \n- The script referenced must be compatible with the postMap hook context.  \n- Incorrect or missing script IDs will result in the hook not executing as intended.  \n- This field is optional if no post-mapping script execution is required.  \n**Dependency chain:**  \n- Depends on the existence of the script repository or script management system.  \n- Relies on the postMap hook lifecycle event to trigger execution.  \n**Technical details:**  \n- Typically a string data type.  \n- Used internally to fetch and execute the script logic.  \n- May require permissions or access rights to execute the referenced script."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the post-map hook. 
This ID is used to reference and manage the specific stack instance within the system, enabling precise tracking and manipulation of resources or configurations tied to that stack during post-processing operations.\n\n**Field behavior**\n- Uniquely identifies a stack within the system.\n- Used during post-map hook execution to associate actions with the correct stack.\n- Immutable once set to ensure consistent reference throughout the lifecycle.\n- Required for operations that involve stack-specific context or resource management.\n\n**Implementation guidance**\n- Must be a valid and existing stack identifier within the system.\n- Should be validated for format and existence before use.\n- Ensure secure handling to prevent unauthorized access or manipulation.\n- Typically generated by the system and passed to hooks automatically.\n\n**Examples**\n- \"stack-12345\"\n- \"prod-stack-67890\"\n- \"dev-environment-stack-001\"\n\n**Important notes**\n- This ID is critical for linking hook operations to the correct stack context.\n- Incorrect or missing _stackId can lead to failed or misapplied post-map operations.\n- Should not be altered manually once assigned.\n\n**Dependency chain**\n- Depends on the stack creation or registration process that generates the ID.\n- Used by post-map hooks to perform stack-specific logic.\n- May influence downstream processes that rely on stack context.\n\n**Technical details**\n- Typically a string value.\n- Format may vary depending on the system's stack naming conventions.\n- Stored and transmitted securely to maintain integrity.\n- Used as a key in database queries or API calls related to stack management."},"configuration":{"type":"object","description":"Configuration settings that define the behavior and parameters for the postMap hook. 
This object allows customization of the hook's execution by specifying various options and values that control its operation.\n\n**Field behavior**\n- Accepts a structured object containing key-value pairs relevant to the postMap hook.\n- Determines how the postMap hook processes data after mapping.\n- Can include flags, thresholds, or other parameters influencing hook logic.\n- Optional or required depending on the specific implementation of the postMap hook.\n\n**Implementation guidance**\n- Validate the configuration object against the expected schema before use.\n- Ensure all required fields within the configuration are present and correctly typed.\n- Use default values for any missing optional parameters to maintain consistent behavior.\n- Document all configurable options clearly for users to understand their impact.\n\n**Examples**\n- `{ \"enableLogging\": true, \"maxRetries\": 3 }`\n- `{ \"timeout\": 5000, \"retryOnFailure\": false }`\n- `{ \"mappingStrategy\": \"strict\", \"caseSensitive\": true }`\n\n**Important notes**\n- Incorrect configuration values may cause the postMap hook to fail or behave unexpectedly.\n- Configuration should be tailored to the specific needs of the postMap operation.\n- Changes to configuration may require restarting or reinitializing the hook process.\n\n**Dependency chain**\n- Depends on the postMap hook being invoked.\n- May influence downstream processes that rely on the output of the postMap hook.\n- Interacts with other hook configurations if multiple hooks are chained.\n\n**Technical details**\n- Typically represented as a JSON object or equivalent data structure.\n- Parsed and validated at runtime before the hook executes.\n- Supports nested objects and arrays if the hook logic requires complex configurations."}},"description":"A hook function that is executed after the mapping process is completed. This function allows for custom logic to be applied to the mapped data, enabling modifications, validations, or side effects based on the results of the mapping operation. 
It receives the mapped data as input and can return a modified version or perform asynchronous operations.\n\n**Field behavior**\n- Invoked immediately after the mapping process finishes.\n- Receives the output of the mapping as its input parameter.\n- Can modify, augment, or validate the mapped data.\n- Supports synchronous or asynchronous execution.\n- The returned value from this hook replaces or updates the mapped data.\n\n**Implementation guidance**\n- Implement as a function that accepts the mapped data object.\n- Ensure proper handling of asynchronous operations if needed.\n- Use this hook to enforce business rules or data transformations post-mapping.\n- Avoid side effects that could interfere with subsequent processing unless intentional.\n- Validate the integrity of the data before returning it.\n\n**Examples**\n- Adjusting date formats or normalizing strings after mapping.\n- Adding computed properties based on mapped fields.\n- Logging or auditing mapped data for monitoring purposes.\n- Filtering out unwanted fields or entries from the mapped result.\n- Triggering notifications or events based on mapped data content.\n\n**Important notes**\n- This hook is optional and only invoked if defined.\n- Errors thrown within this hook may affect the overall mapping operation.\n- The hook should be performant to avoid slowing down the mapping pipeline.\n- Returned data must conform to expected schema to prevent downstream errors.\n\n**Dependency chain**\n- Depends on the completion of the primary mapping function.\n- May influence subsequent processing steps that consume the mapped data.\n- Should be coordinated with pre-mapping hooks to maintain data consistency.\n\n**Technical details**\n- Typically implemented as a callback or promise-returning function.\n- Receives one argument: the mapped data object.\n- Returns either the modified data object or a promise resolving to it.\n- Integrated into the mapping lifecycle after the main mapping logic completes."},"postSubmit":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the name of the function to be executed after the submission process completes. 
This function is typically used to perform post-submission tasks such as data processing, notifications, or cleanup operations.\n\n**Field behavior**\n- Invoked automatically after the main submission event finishes.\n- Can trigger additional workflows or side effects based on submission results.\n- Supports asynchronous execution depending on the implementation.\n\n**Implementation guidance**\n- Ensure the function name corresponds to a valid, accessible function within the execution context.\n- Validate that the function handles errors gracefully to avoid disrupting the overall submission flow.\n- Document the expected input parameters and output behavior of the function for maintainability.\n\n**Examples**\n- \"sendConfirmationEmail\"\n- \"logSubmissionData\"\n- \"updateUserStatus\"\n\n**Important notes**\n- The function must be idempotent if the submission process can be retried.\n- Avoid long-running operations within this function to prevent blocking the submission pipeline.\n- Security considerations should be taken into account, especially if the function interacts with external systems.\n\n**Dependency chain**\n- Depends on the successful completion of the submission event.\n- May rely on data generated or modified during the submission process.\n- Could trigger downstream hooks or events based on its execution.\n\n**Technical details**\n- Typically referenced by name as a string.\n- Execution context must have access to the function definition.\n- May support both synchronous and asynchronous invocation patterns depending on the platform."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier of the script to be executed after the submission process completes. This ID links the post-submit hook to a specific script that performs additional operations or custom logic once the main submission workflow has finished.\n\n**Field behavior**\n- Specifies which script is triggered automatically after the submission event.\n- Must correspond to a valid and existing script within the system.\n- Controls post-processing actions such as notifications, data transformations, or logging.\n\n**Implementation guidance**\n- Ensure the script ID is correctly referenced and the script is deployed before assigning this property.\n- Validate the script’s permissions and runtime environment compatibility.\n- Use consistent naming or ID conventions to avoid conflicts or errors.\n\n**Examples**\n- \"script_12345\"\n- \"postSubmitCleanupScript\"\n- \"notifyUserScript\"\n\n**Important notes**\n- If the script ID is invalid or missing, the post-submit hook will not execute.\n- Changes to the script ID require redeployment or configuration updates.\n- The script should be idempotent and handle errors gracefully to avoid disrupting the submission flow.\n\n**Dependency chain**\n- Depends on the existence of the script resource identified by this ID.\n- Linked to the post-submit hook configuration within the submission workflow.\n- May interact with other hooks or system components triggered after submission.\n\n**Technical details**\n- Typically represented as a string identifier.\n- Must be unique within the scope of available scripts.\n- Used by the system to locate and invoke the corresponding script at runtime."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the post-submit hook. 
This ID is used to reference and manage the specific stack instance within the system, enabling operations such as updates, deletions, or status checks after a submission event.\n\n**Field behavior**\n- Uniquely identifies a stack within the system.\n- Used to link post-submit actions to the correct stack.\n- Typically immutable once assigned to ensure consistent referencing.\n- Required for executing stack-specific post-submit logic.\n\n**Implementation guidance**\n- Ensure the ID references an existing stack resource in the account.\n- Validate the presence and format of the stack ID before processing post-submit hooks.\n- Use this ID to fetch or manipulate stack-related data during post-submit operations.\n- Handle cases where the stack ID may not exist or is invalid gracefully.\n\n**Examples**\n- \"5f3c8d2a1b2c3d4e5f6a7b8c\"\n- \"62b1f0e4a9c3d51234567890\"\n\n**Important notes**\n- This ID must correspond to an existing stack within the environment.\n- Incorrect or missing stack IDs can cause post-submit hooks to fail.\n- The stack ID is critical for audit trails and debugging post-submit processes.\n\n**Dependency chain**\n- Depends on the stack creation or registration process to generate the ID.\n- Used by post-submit hook handlers to identify the target stack.\n- May be referenced by logging, monitoring, or notification systems post-submission.\n\n**Technical details**\n- A string value in ObjectId format (24 hexadecimal characters) that references a stack resource.\n- Stored and transmitted securely to prevent unauthorized access or manipulation.\n- Should be indexed in databases for efficient retrieval during post-submit operations."},"configuration":{"type":"object","description":"Configuration settings for the post-submit hook that define its behavior and parameters. This object contains key-value pairs specifying how the hook should operate after a submission event, including any necessary options, flags, or environment variables.\n\n**Field behavior**\n- Determines the specific actions and parameters used by the post-submit hook.\n- Can include nested settings tailored to the hook’s functionality.\n- Modifies the execution flow or output based on provided configuration values.\n\n**Implementation guidance**\n- Validate the configuration object against the expected schema for the post-submit hook.\n- Support extensibility to allow additional parameters without breaking existing functionality.\n- Ensure sensitive information within the configuration is handled securely.\n- Provide clear error messages if required configuration fields are missing or invalid.\n\n**Examples**\n- Setting a timeout duration for the post-submit process.\n- Specifying environment variables needed during hook execution.\n- Enabling or disabling certain features of the post-submit hook via flags.\n\n**Important notes**\n- The configuration must align with the capabilities of the post-submit hook implementation.\n- Incorrect or incomplete configuration may cause the hook to fail or behave unexpectedly.\n- Configuration changes typically require validation and testing before deployment.\n\n**Dependency chain**\n- Depends on the post-submit hook being triggered successfully.\n- May interact with other hooks or system components based on configuration.\n- Relies on the underlying system to interpret and apply configuration settings correctly.\n\n**Technical details**\n- Typically represented as a JSON object or equivalent data structure.\n- Supports nested objects and arrays for complex configurations.\n- May include string, numeric, boolean, or null values depending on parameters.\n- Parsed and applied at runtime during the post-submit hook execution phase."}},"description":"A hook that is executed immediately after the import finishes submitting records to the destination application. It allows custom logic to run post-submission, such as processing the destination's response, recording errors, triggering notifications, or performing cleanup tasks.\n\n**Field behavior**\n- Invoked only after record submission has finished, regardless of success or failure.\n- Receives the submitted records along with the response or error information returned by the destination.\n- Can be asynchronous to support follow-up operations such as additional API calls.\n- Does not affect the submission itself but handles post-submission side effects.\n\n**Implementation guidance**\n- Ensure the hook handles both success and error outcomes gracefully.\n- Avoid long-running operations that would delay the rest of the flow.\n- Use this hook to trigger follow-up actions such as auditing, notifications, or response enrichment.\n- Validate that the referenced function or script exists before relying on it at runtime.\n\n**Examples**\n- Logging submission results for monitoring purposes.\n- Flagging or annotating records that the destination rejected.\n- Enriching the response data before it is passed to downstream flow steps.\n- Triggering a notification when a batch of records fails to import.\n\n**Important notes**\n- This hook is optional; if not provided, no post-submission actions are performed.\n- It should not be used to modify the data being submitted; use preMap or postMap for that purpose.\n- Proper error handling within this hook is crucial to avoid unhandled exceptions.\n\n**Dependency chain**\n- Depends on the submission of records to the destination completing.\n- May influence the response data consumed by downstream flow steps.\n- Can be combined with preMap and postMap hooks for full import lifecycle customization.\n\n**Technical details**\n- Typically implemented as a function or script that receives the submitted data and the destination response.\n- Can return a promise to support asynchronous operations.\n- Registered under the hooks.postSubmit property of the import configuration.\n- Execution context depends on whether a script or a stack is referenced."},"postAggregate":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the function to be executed during the post-aggregation hook phase. This function is invoked after the aggregation process completes, allowing for custom processing, transformation, or validation of the aggregated data before it is returned or further processed. 
The function should be designed to handle the aggregated data structure and can modify, enrich, or analyze the results as needed.\n\n**Field behavior**\n- Executed after the aggregation operation finishes.\n- Receives the aggregated data as input.\n- Can modify or augment the aggregated results.\n- Supports custom logic tailored to specific post-processing requirements.\n\n**Implementation guidance**\n- Ensure the function signature matches the expected input and output formats.\n- Handle potential errors within the function to avoid disrupting the overall aggregation flow.\n- Keep the function efficient to minimize latency in the post-aggregation phase.\n- Validate the output of the function to maintain data integrity.\n\n**Examples**\n- A function that filters aggregated results based on certain criteria.\n- A function that calculates additional metrics from the aggregated data.\n- A function that formats or restructures the aggregated output for downstream consumption.\n\n**Important notes**\n- The function must be deterministic and side-effect free to ensure consistent results.\n- Avoid long-running operations within the function to prevent performance bottlenecks.\n- The function should not alter the original data source, only the aggregated output.\n\n**Dependency chain**\n- Depends on the completion of the aggregation process.\n- May rely on the schema or structure of the aggregated data.\n- Can be linked to subsequent hooks or processing steps that consume the modified data.\n\n**Technical details**\n- Typically implemented as a callback or lambda function.\n- May be written in the same language as the aggregation engine or as a supported scripting language.\n- Should conform to the API’s expected interface for hook functions.\n- Execution context may include metadata about the aggregation operation."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier of the script to be executed after the aggregation process completes. This ID links the post-aggregation hook to a specific script that contains the logic or operations to be performed. 
It ensures that the correct script is triggered in the post-aggregation phase, enabling customized processing or data manipulation.\n\n**Field behavior**\n- Accepts a string representing the script's unique ID.\n- Used exclusively in the post-aggregation hook context.\n- Triggers the execution of the associated script after aggregation.\n- Must correspond to an existing and accessible script within the system.\n\n**Implementation guidance**\n- Validate that the script ID exists and is active before assignment.\n- Ensure the script linked by this ID is compatible with post-aggregation data.\n- Handle errors gracefully if the script ID is invalid or the script execution fails.\n- Maintain security by restricting script IDs to authorized scripts only.\n\n**Examples**\n- \"script12345\"\n- \"postAggScript_v2\"\n- \"cleanup_after_aggregation\"\n\n**Important notes**\n- The script ID must be unique within the scope of available scripts.\n- Changing the script ID will alter the behavior of the post-aggregation hook.\n- The script referenced should be optimized for performance to avoid delays.\n- Ensure proper permissions are set for the script to execute in this context.\n\n**Dependency chain**\n- Depends on the existence of a script repository or registry.\n- Relies on the aggregation process completing successfully before execution.\n- Interacts with the postAggregate hook mechanism to trigger execution.\n\n**Technical details**\n- Typically represented as a string data type.\n- May be stored in a database or configuration file referencing script metadata.\n- Used by the system's execution engine to locate and run the script.\n- Supports integration with scripting languages or environments supported by the platform."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the post-aggregation hook. 
This ID is used to reference and manage the specific stack context within which the hook operates, ensuring that the correct stack resources and configurations are applied during the post-aggregation process.\n\n**Field behavior**\n- Uniquely identifies the stack instance related to the post-aggregation hook.\n- Used to link the hook execution context to the appropriate stack environment.\n- Immutable once set to maintain consistency throughout the hook lifecycle.\n\n**Implementation guidance**\n- Must be a valid and existing stack identifier within the system.\n- Should be assigned automatically by the system or provided explicitly during hook configuration.\n- Ensure proper validation to prevent referencing non-existent or unauthorized stacks.\n\n**Examples**\n- \"stack-12345\"\n- \"prod-stack-67890\"\n- \"dev-environment-stack-001\"\n\n**Important notes**\n- This ID is critical for resolving stack-specific resources and permissions.\n- Incorrect or missing _stackId values can lead to hook execution failures.\n- Should be handled securely to prevent unauthorized access to stack data.\n\n**Dependency chain**\n- Depends on the stack management system to generate and maintain stack IDs.\n- Utilized by the post-aggregation hook execution engine to retrieve stack context.\n- May influence downstream processes that rely on stack-specific configurations.\n\n**Technical details**\n- Typically represented as a string.\n- Format and length may vary depending on the stack management system.\n- Should be indexed or cached for efficient lookup during hook execution."},"configuration":{"type":"object","description":"Configuration settings for the post-aggregate hook that define its behavior and parameters during execution. This property allows customization of the hook's operation by specifying key-value pairs or structured data relevant to the hook's logic.\n\n**Field behavior**\n- Accepts structured data or key-value pairs that tailor the hook's functionality.\n- Influences how the post-aggregate hook processes data after aggregation.\n- Can be optional or required depending on the specific hook implementation.\n- Supports dynamic adjustment of hook behavior without code changes.\n\n**Implementation guidance**\n- Validate configuration inputs to ensure they conform to expected formats and types.\n- Document all configurable options clearly for users to understand their impact.\n- Allow for extensibility to accommodate future configuration parameters.\n- Ensure secure handling of configuration data to prevent injection or misuse.\n\n**Examples**\n- {\"threshold\": 10, \"mode\": \"strict\"}\n- {\"enableLogging\": true, \"retryCount\": 3}\n- {\"filters\": [\"status:active\", \"region:us-east\"]}\n\n**Important notes**\n- Incorrect configuration values may cause the hook to malfunction or produce unexpected results.\n- Configuration should be versioned or tracked to maintain consistency across deployments.\n- Sensitive information should not be included in configuration unless properly encrypted or secured.\n\n**Dependency chain**\n- Depends on the specific post-aggregate hook implementation to interpret configuration.\n- May interact with other hook properties such as conditions or triggers.\n- Configuration changes can affect downstream processing or output of the aggregation.\n\n**Technical details**\n- Typically represented as a JSON object or map structure.\n- Parsed and applied at runtime when the post-aggregate hook is invoked.\n- Supports nested structures for complex configuration scenarios.\n- 
Should be compatible with the overall API schema and validation rules."}},"description":"A hook function that is executed after the aggregate operation has completed. This hook allows for custom logic to be applied to the aggregated results before they are returned or further processed. It is typically used to modify, filter, or augment the aggregated data based on specific application requirements.\n\n**Field behavior**\n- Invoked immediately after the aggregation step completes.\n- Receives the aggregated results as input.\n- Can modify or replace the aggregated data.\n- Supports asynchronous operations if needed.\n- Does not affect the execution of the aggregate operation itself.\n\n**Implementation guidance**\n- Ensure the hook handles errors gracefully to avoid disrupting the response.\n- Use this hook to implement custom transformations or validations on aggregated data.\n- Avoid heavy computations to maintain performance.\n- Return the modified data or the original data if no changes are needed.\n- Consider security implications when modifying aggregated results.\n\n**Examples**\n- Filtering out sensitive fields from aggregated results.\n- Adding computed properties based on aggregation output.\n- Logging or auditing aggregated data for monitoring purposes.\n- Transforming aggregated data into a different format before it is delivered downstream.\n\n**Important notes**\n- This hook applies only to imports that aggregate records and is not invoked otherwise.\n- Modifications in this hook do not affect the source data.\n- The hook should return the final data to be used downstream.\n- Properly handle asynchronous code to ensure the hook completes before the response is returned.\n\n**Dependency chain**\n- Triggered after the aggregation phase.\n- Can influence the data passed to subsequent hooks or response handlers.\n- Dependent on the successful completion of the aggregate operation.\n\n**Technical details**\n- Typically implemented as a function accepting the aggregated results and context.\n- Supports both synchronous and asynchronous function signatures.\n- Receives parameters such as the aggregated data, operation context, and hook metadata.\n- Expected to return the processed aggregated data or a promise resolving to it."}},"description":"Defines a collection of hook functions or callbacks that are triggered at specific points during the execution lifecycle of a process or operation. 
These hooks allow customization and extension of default behavior by executing user-defined logic before, after, or during certain events.\n\n**Field behavior**\n- Supports multiple hook functions, each associated with a particular event or lifecycle stage.\n- Hooks are invoked automatically by the system when their corresponding event occurs.\n- Can be used to modify input data, handle side effects, perform validations, or trigger additional processes.\n- Execution order of hooks may be sequential or parallel depending on implementation.\n\n**Implementation guidance**\n- Ensure hooks are registered with clear event names or identifiers.\n- Validate hook functions for correct signature and expected parameters.\n- Provide error handling within hooks to avoid disrupting the main process.\n- Document available hook points and expected behavior for each.\n- Allow hooks to be asynchronous if the environment supports it.\n\n**Examples**\n- A \"beforeSave\" hook that validates data before saving to a database.\n- An \"afterFetch\" hook that formats or enriches data after retrieval.\n- A \"beforeDelete\" hook that checks permissions before allowing deletion.\n- A \"onError\" hook that logs errors or sends notifications.\n\n**Important notes**\n- Hooks should not introduce significant latency or side effects that impact core functionality.\n- Avoid circular dependencies or infinite loops caused by hooks triggering each other.\n- Security considerations must be taken into account when executing user-defined hooks.\n- Hooks may have access to sensitive data; ensure proper access controls.\n\n**Dependency chain**\n- Hooks depend on the lifecycle events or triggers defined by the system.\n- Hook execution may depend on the successful completion of prior steps.\n- Hook results can influence subsequent processing stages or final outcomes.\n\n**Technical details**\n- Typically implemented as functions, methods, or callbacks registered in a map or list.\n- May support synchronous or asynchronous execution models.\n- Can accept parameters such as context objects, event data, or state information.\n- Return values from hooks may be used to modify behavior or data flow.\n- Often integrated with event emitter or observer patterns."},"sampleResponseData":{"type":"object","description":"Contains example data representing a typical response returned by the API endpoint. This sample response data helps developers understand the structure, format, and types of values they can expect when interacting with the API. 
It serves as a reference for testing, debugging, and documentation purposes.\n\n**Field behavior**\n- Represents a static example of the API's response payload.\n- Should reflect the most common or typical response scenario.\n- May include nested objects, arrays, and various data types to illustrate the response structure.\n- Does not affect the actual API behavior or response generation.\n\n**Implementation guidance**\n- Ensure the sample data is accurate and up-to-date with the current API response schema.\n- Include all required fields and typical optional fields to provide a comprehensive example.\n- Use realistic values that clearly demonstrate the data format and constraints.\n- Update the sample response whenever the API response structure changes.\n\n**Examples**\n- A JSON object showing user details returned from a user info endpoint.\n- An array of product items with fields like id, name, price, and availability.\n- A nested object illustrating a complex response with embedded metadata and links.\n\n**Important notes**\n- This data is for illustrative purposes only and should not be used as actual input or output.\n- It helps consumers of the API to quickly grasp the expected response format.\n- Should be consistent with the API specification and schema definitions.\n\n**Dependency chain**\n- Depends on the API endpoint's response schema and data model.\n- Should align with any validation rules or constraints defined for the response.\n- May be linked to example requests or other documentation elements for completeness.\n\n**Technical details**\n- Typically formatted as a JSON or XML snippet matching the API's response content type.\n- May include placeholder values or sample identifiers.\n- Should be parsable and valid according to the API's response schema."},"responseTransform":{"allOf":[{"description":"Data transformation configuration for reshaping API response data during import operations.\n\n**Import-specific behavior**\n\n**Response Transformation**: Transforms the response data returned by the destination system after records have been imported. This allows you to reshape, filter, or enrich the response before it is processed by downstream flow steps or returned to the client.\n\n**Common Use Cases**:\n- Extracting relevant fields from verbose API responses\n- Normalizing response formats across different destination systems\n- Adding computed fields based on response data\n- Filtering out sensitive information before logging or further processing\n"},{"$ref":"#/components/schemas/Transform"}]},"modelMetadata":{"type":"object","description":"Contains detailed metadata about the model, including its version, architecture, training data characteristics, and any relevant configuration parameters that define its behavior and capabilities. 
This information helps in understanding the model's provenance, performance expectations, and compatibility with various tasks or environments.\n**Field behavior**\n- Captures comprehensive details about the model's identity and configuration.\n- May include version numbers, architecture type, training dataset descriptions, and hyperparameters.\n- Used for tracking model lineage and ensuring reproducibility.\n- Supports validation and compatibility checks before deployment or usage.\n**Implementation guidance**\n- Ensure metadata is accurate and up-to-date with the current model iteration.\n- Include standardized fields for easy parsing and comparison across models.\n- Use consistent formatting and naming conventions for metadata attributes.\n- Allow extensibility to accommodate future metadata elements without breaking compatibility.\n**Examples**\n- Model version: \"v1.2.3\"\n- Architecture: \"Transformer-based, 12 layers, 768 hidden units\"\n- Training data: \"Dataset XYZ, 1 million samples, balanced classes\"\n- Hyperparameters: \"learning_rate=0.001, batch_size=32\"\n**Important notes**\n- Metadata should be immutable once the model is finalized to ensure traceability.\n- Sensitive information such as proprietary training data details should be handled carefully.\n- Metadata completeness directly impacts model governance and audit processes.\n- Incomplete or inaccurate metadata can lead to misuse or misinterpretation of the model.\n**Dependency chain**\n- Relies on accurate model training and version control systems.\n- Interacts with deployment pipelines that validate model compatibility.\n- Supports monitoring systems that track model performance over time.\n**Technical details**\n- Typically represented as a structured object or JSON document.\n- May include nested fields for complex metadata attributes.\n- Should be easily serializable and deserializable for storage and transmission.\n- Can be linked to external documentation or repositories for extended information."},"mapping":{"type":"object","properties":{"fields":{"type":"string","enum":["string","number","boolean","numberarray","stringarray","json"],"description":"Defines the collection of individual field mappings within the overall mapping configuration. Each entry specifies the characteristics, data types, and indexing options for a particular field in the dataset or document structure. 
This property enables precise control over how each field is interpreted, stored, and queried by the system.\n\n**Field behavior**\n- Contains a set of key-value pairs where each key is a field name and the value is its mapping definition.\n- Determines how data in each field is processed, indexed, and searched.\n- Supports nested fields and complex data structures.\n- Can include settings such as data type, analyzers, norms, and indexing options.\n\n**Implementation guidance**\n- Define all relevant fields explicitly to optimize search and storage behavior.\n- Use consistent naming conventions for field names.\n- Specify appropriate data types to ensure correct parsing and querying.\n- Include nested mappings for objects or arrays as needed.\n- Validate field definitions to prevent conflicts or errors.\n\n**Examples**\n- Mapping a text field with a custom analyzer.\n- Defining a date field with a specific format.\n- Specifying a keyword field for exact match searches.\n- Creating nested object fields with their own sub-fields.\n\n**Important notes**\n- Omitting fields may lead to default dynamic mapping behavior, which might not be optimal.\n- Incorrect field definitions can cause indexing errors or unexpected query results.\n- Changes to field mappings often require reindexing of existing data.\n- Field names should avoid reserved characters or keywords.\n\n**Dependency chain**\n- Depends on the overall mapping configuration context.\n- Influences indexing and search components downstream.\n- Interacts with analyzers, tokenizers, and query parsers.\n\n**Technical details**\n- Typically represented as a JSON or YAML object with field names as keys.\n- Each field mapping includes properties like \"type\", \"index\", \"analyzer\", \"fields\", etc.\n- Supports complex types such as objects, nested, geo_point, and geo_shape.\n- May include metadata fields for internal use."},"lists":{"type":"string","enum":["string","number","boolean","numberarray","stringarray"],"description":"A collection of lists that define specific groupings or categories used within the mapping context. Each list contains a set of related items or values that are referenced to organize, filter, or map data effectively. 
These lists facilitate structured data handling and improve the clarity and maintainability of the mapping configuration.\n\n**Field behavior**\n- Contains multiple named lists, each representing a distinct category or grouping.\n- Lists can be referenced elsewhere in the mapping to apply consistent logic or transformations.\n- Supports dynamic or static content depending on the mapping requirements.\n- Enables modular and reusable data definitions within the mapping.\n\n**Implementation guidance**\n- Ensure each list has a unique identifier or name for clear referencing.\n- Populate lists with relevant and validated items to avoid mapping errors.\n- Use lists to centralize repeated values or categories to simplify updates.\n- Consider the size and complexity of lists to maintain performance and readability.\n\n**Examples**\n- A list of country codes used for regional mapping.\n- A list of product categories for classification purposes.\n- A list of status codes to standardize state representation.\n- A list of user roles for access control mapping.\n\n**Important notes**\n- Lists should be kept up-to-date to reflect current data requirements.\n- Avoid duplication of items across different lists unless intentional.\n- The structure and format of list items must align with the overall mapping schema.\n- Changes to lists may impact dependent mapping logic; test thoroughly after updates.\n\n**Dependency chain**\n- Lists may depend on external data sources or configuration files.\n- Other mapping properties or rules may reference these lists for validation or transformation.\n- Updates to lists can cascade to affect downstream processing or output.\n\n**Technical details**\n- Typically represented as arrays or collections within the mapping schema.\n- Items within lists can be simple values (strings, numbers) or complex objects.\n- Supports nesting or hierarchical structures if the schema allows.\n- May include metadata or annotations to describe list purpose or usage."}},"description":"Defines the association between input data fields and their corresponding target fields or processing rules within the system. This mapping specifies how data should be transformed, routed, or interpreted during processing to ensure accurate and consistent handling. 
It serves as a blueprint for data translation and integration tasks, enabling seamless interoperability between different data formats or components.\n\n**Field behavior**\n- Maps source fields to target fields or processing instructions.\n- Determines how input data is transformed or routed.\n- Can include nested or hierarchical mappings for complex data structures.\n- Supports conditional or dynamic mapping rules based on input values.\n\n**Implementation guidance**\n- Ensure mappings are clearly defined and validated to prevent data loss or misinterpretation.\n- Use consistent naming conventions for source and target fields.\n- Support extensibility to accommodate new fields or transformation rules.\n- Provide mechanisms for error handling when mappings fail or are incomplete.\n\n**Examples**\n- Mapping a JSON input field \"user_name\" to a database column \"username\".\n- Defining a transformation rule that converts date formats from \"MM/DD/YYYY\" to \"YYYY-MM-DD\".\n- Routing data from an input field \"status\" to different processing modules based on its value.\n\n**Important notes**\n- Incorrect or incomplete mappings can lead to data corruption or processing errors.\n- Mapping definitions should be version-controlled to track changes over time.\n- Consider performance implications when applying complex or large-scale mappings.\n\n**Dependency chain**\n- Dependent on the structure and schema of input data.\n- Influences downstream data processing, validation, and storage components.\n- May interact with transformation, validation, and routing modules.\n\n**Technical details**\n- Typically represented as key-value pairs, dictionaries, or mapping objects.\n- May support expressions or functions for dynamic value computation.\n- Can be serialized in formats such as JSON, YAML, or XML for configuration.\n- Should include metadata for data types, optionality, and default values where applicable."},"mappings":{"allOf":[{"description":"Field mapping configuration for transforming incoming records during import operations.\n\n**Import-specific behavior**\n\n**Data Transformation**: Mappings define how fields from incoming records are transformed and mapped to the destination system's field structure. This enables data normalization, field renaming, value transformation, and complex nested object construction.\n\n**Common Use Cases**:\n- Mapping source field names to destination field names\n- Transforming data types and formats\n- Building nested object structures required by the destination\n- Applying formulas and lookups to derive field values\n"},{"$ref":"#/components/schemas/Mappings"}]},"lookups":{"type":"string","description":"A collection of predefined reference data or mappings used to standardize and validate input values within the system. 
Lookups typically consist of key-value pairs or enumerations that facilitate consistent data interpretation and reduce errors by providing a controlled vocabulary or set of options.\n\n**Field behavior**\n- Serves as a centralized source for reference data used across various components.\n- Enables validation and normalization of input values against predefined sets.\n- Supports dynamic retrieval of lookup values to accommodate updates without code changes.\n- May include hierarchical or categorized data structures for complex reference data.\n\n**Implementation guidance**\n- Ensure lookups are comprehensive and cover all necessary reference data for the application domain.\n- Design lookups to be easily extendable and maintainable, allowing additions or modifications without impacting existing functionality.\n- Implement caching mechanisms if lookups are frequently accessed to optimize performance.\n- Provide clear documentation for each lookup entry to aid developers and users in understanding their purpose.\n\n**Examples**\n- Country codes and names (e.g., \"US\" -> \"United States\").\n- Status codes for order processing (e.g., \"PENDING\", \"SHIPPED\", \"DELIVERED\").\n- Currency codes and symbols (e.g., \"USD\" -> \"$\").\n- Product categories or types used in an inventory system.\n\n**Important notes**\n- Lookups should be kept up-to-date to reflect changes in business rules or external standards.\n- Avoid hardcoding lookup values in multiple places; centralize them to maintain consistency.\n- Consider localization requirements if lookup values need to be presented in multiple languages.\n- Validate input data against lookups to prevent invalid or unsupported values from entering the system.\n\n**Dependency chain**\n- Often used by input validation modules, UI dropdowns, reporting tools, and business logic components.\n- May depend on external data sources or configuration files for initialization.\n- Can influence downstream processing and data storage by enforcing standardized values.\n\n**Technical details**\n- Typically implemented as dictionaries, maps, or database tables.\n- May support versioning to track changes over time.\n- Can be exposed via APIs for dynamic retrieval by client applications.\n- Should include mechanisms for synchronization if distributed across multiple systems."},"settingsForm":{"$ref":"#/components/schemas/Form"},"preSave":{"$ref":"#/components/schemas/PreSave"},"settings":{"$ref":"#/components/schemas/Settings"}}},"As2":{"type":"object","description":"Configuration for As2 imports","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"The unique identifier for the Trading Partner (TP) Connector utilized within the AS2 communication framework. This identifier serves as a critical reference that links the AS2 configuration to the specific connector responsible for the secure transmission and reception of AS2 messages. It ensures that all message exchanges are accurately routed through the designated connector, thereby maintaining the integrity, security, and reliability of the communication process. This ID is fundamental for establishing, managing, and terminating AS2 sessions, enabling proper encryption, digital signing, and delivery confirmation of messages. 
It acts as a key element in the orchestration of AS2 workflows, ensuring seamless interoperability between trading partners.\n\n**Field behavior**\n- Uniquely identifies a TP Connector within the trading partner ecosystem.\n- Associates AS2 message exchanges with the appropriate connector configuration.\n- Essential for initiating, maintaining, and terminating AS2 communication sessions.\n- Immutable once set to preserve consistent routing and session management.\n- Validated against existing connectors to prevent misconfiguration.\n\n**Implementation guidance**\n- Must correspond to a valid, active connector ID registered in the trading partner management system.\n- Should be assigned during initial AS2 setup and remain unchanged unless a deliberate reconfiguration is required.\n- Implement validation checks to confirm the ID’s existence and compatibility with AS2 protocols before assignment.\n- Ensure synchronization between the connector ID and its configuration to avoid communication failures.\n- Handle updates cautiously, with appropriate notifications and fallback mechanisms to maintain session continuity.\n\n**Examples**\n- \"connector-12345\"\n- \"tp-connector-67890\"\n- \"as2-connector-001\"\n\n**Important notes**\n- Incorrect or invalid IDs can lead to failed message transmissions or misrouted communications.\n- The referenced connector must be fully configured to support AS2 standards, including security and protocol settings.\n- Changes to this identifier should be managed through controlled processes to prevent disruption of active AS2 sessions.\n- This ID is integral to the AS2 message lifecycle, impacting message encryption, signing, and delivery confirmation.\n\n**Dependency chain**\n- Relies on the existence and proper configuration of a TP Connector entity within the trading partner management system.\n- Interacts closely with AS2 configuration parameters that govern message handling, security credentials, and session management.\n- Dependent on system components responsible for connector registration, validation, and lifecycle management.\n\n**Technical details**\n- Represented as a string conforming to the identifier format defined by the trading partner management system.\n- Used internally by the AS2"},"fileNameTemplate":{"type":"string","description":"fileNameTemplate specifies the template string used to dynamically generate file names for AS2 message payloads during both transmission and receipt processes. This template allows the inclusion of customizable placeholders that are replaced at runtime with relevant contextual values such as timestamps, unique message identifiers, sender and receiver IDs, or other metadata. By leveraging these placeholders, the system ensures that generated file names are unique, descriptive, and meaningful, facilitating efficient file management, traceability, and downstream processing. 
The template is crucial for systematically organizing files, preventing naming conflicts, supporting audit trails, and enabling seamless integration with various file storage and transfer mechanisms.\n\n**Field behavior**\n- Defines the naming pattern and structure for AS2 message payload files.\n- Supports dynamic substitution of placeholders with runtime metadata values.\n- Ensures uniqueness and clarity in generated file names to prevent collisions.\n- Directly influences file storage organization, retrieval efficiency, and processing workflows.\n- Affects audit trails and logging by enforcing consistent and meaningful file naming conventions.\n- Applies uniformly to both outgoing and incoming AS2 message payloads to maintain consistency.\n- Allows flexibility to adapt file naming to organizational standards and compliance requirements.\n\n**Implementation guidance**\n- Use clear, descriptive placeholders such as {timestamp}, {messageId}, {senderId}, {receiverId}, and {date} to capture relevant metadata.\n- Validate templates to exclude invalid or unsupported characters based on the target file system’s constraints.\n- Incorporate date and time components to improve uniqueness and enable chronological sorting of files.\n- Explicitly include file extensions (e.g., .xml, .edi, .txt) to reflect the payload format and ensure compatibility with downstream systems.\n- Provide sensible default templates to maintain system functionality when custom templates are not specified.\n- Implement robust parsing and substitution logic to reliably handle all supported placeholders and edge cases.\n- Sanitize substituted values to prevent injection of unsafe, invalid, or reserved characters that could cause errors.\n- Consider locale and timezone settings when generating date/time placeholders to ensure consistency across systems.\n- Support escape sequences or delimiter customization to handle special characters within templates if necessary.\n\n**Examples**\n- \"AS2_{senderId}_{timestamp}.xml\"\n- \"{messageId}_{receiverId}_{date}.edi\"\n- \"payload_{timestamp}.txt\"\n- \"msg_{date}_{senderId}_{receiverId}_{messageId}.xml\"\n- \"AS2_{timestamp}_{messageId}.payload\"\n- \"inbound_{receiverId}_{date}_{messageId}.xml\"\n-"},"messageIdTemplate":{"type":"string","description":"messageIdTemplate is a customizable template string designed to generate the Message-ID header for AS2 messages, enabling the creation of unique, meaningful, and standards-compliant identifiers for each message. This template supports dynamic placeholders that are replaced with actual runtime values—such as timestamps, UUIDs, sequence numbers, sender-specific information, or other contextual data—ensuring that every Message-ID is globally unique and strictly adheres to RFC 5322 email header formatting standards. 
By defining the precise format of the Message-ID, this property plays a critical role in message tracking, logging, troubleshooting, auditing, and interoperability within AS2 communication workflows, facilitating reliable message correlation and lifecycle management.\n\n**Field behavior**\n- Specifies the exact format and structure of the Message-ID header in AS2 protocol messages.\n- Supports dynamic placeholders that are substituted with real-time values during message creation.\n- Guarantees global uniqueness of each Message-ID to prevent duplication and enable accurate message identification.\n- Directly influences message correlation, tracking, diagnostics, and auditing processes across AS2 systems.\n- Ensures compliance with email header syntax and AS2 protocol requirements to maintain interoperability.\n- Affects downstream processing by systems that rely on Message-ID for message lifecycle management.\n\n**Implementation guidance**\n- Use a clear and consistent placeholder syntax (e.g., `${placeholderName}`) for dynamic content insertion.\n- Incorporate unique elements such as UUIDs, timestamps, sequence numbers, sender domains, or other identifiers to guarantee global uniqueness.\n- Validate the generated Message-ID string against AS2 protocol specifications and RFC 5322 email header standards.\n- Ensure the final Message-ID is properly formatted, enclosed in angle brackets (`< >`), and free from invalid or disallowed characters.\n- Avoid embedding sensitive or confidential information within the Message-ID to mitigate potential security risks.\n- Thoroughly test the template to confirm compatibility with receiving AS2 systems and message processing components.\n- Consider the impact of the template on downstream systems that rely on Message-ID for message correlation and tracking.\n- Maintain consistency in template usage across environments to support reliable message auditing and troubleshooting.\n\n**Examples**\n- `<${uuid}@example.com>` — generates a Message-ID combining a UUID with a domain name.\n- `<${timestamp}.${sequence}@as2sender.com>` — includes a timestamp and sequence number for ordered uniqueness.\n- `<msg-${messageNumber}@${senderDomain}>` — uses message number and sender domain placeholders for traceability.\n- `<${date}-${uuid}@company"},"maxRetries":{"type":"number","description":"Maximum number of retry attempts for the AS2 message transmission in case of transient failures such as network errors, timeouts, or temporary unavailability of the recipient system. This setting controls how many times the system will automatically attempt to resend a message after an initial failure before marking the transmission as failed. 
It applies exclusively to retry attempts following the first transmission and does not influence the initial send operation.\n\n**Field behavior**\n- Specifies the total count of resend attempts after the initial transmission failure.\n- Retries are triggered only for recoverable, transient errors where subsequent attempts have a reasonable chance of success.\n- Retry attempts cease once the configured maximum number is reached, resulting in a final failure status.\n- Does not affect the initial message send; governs only the retry logic after the first failure.\n- Retry attempts are typically spaced out using backoff strategies to prevent overwhelming the network or recipient system.\n- Each retry is initiated only after the previous attempt has conclusively failed.\n- The retry mechanism respects configured backoff intervals and timing constraints to optimize network usage and avoid congestion.\n\n**Implementation guidance**\n- Select a balanced default value to avoid excessive retries that could cause delays or strain system resources.\n- Implement exponential backoff or incremental delay strategies between retries to reduce network congestion and improve success rates.\n- Ensure retry attempts comply with overall operation timeouts and do not exceed system or protocol limits.\n- Validate input to accept only non-negative integers; setting zero disables retries entirely.\n- Integrate retry logic with error detection, logging, and alerting mechanisms for comprehensive failure handling and monitoring.\n- Consider the impact of retries on message ordering and idempotency to prevent duplicate processing or inconsistent states.\n- Provide clear feedback and detailed logging for each retry attempt to facilitate troubleshooting and operational transparency.\n\n**Examples**\n- maxRetries: 3 — The system will attempt to resend the message up to three times after the initial failure before giving up.\n- maxRetries: 0 — No retries will be performed; the failure is reported immediately after the first unsuccessful attempt.\n- maxRetries: 5 — Allows up to five retry attempts, increasing the chance of successful delivery in unstable network conditions.\n- maxRetries: 1 — A single retry attempt is made, suitable for environments with low tolerance for delays.\n\n**Important notes**\n- Excessive retry attempts can increase latency and consume additional system and network resources.\n- Retry behavior should align with AS2 protocol standards and best practices to maintain compliance and interoperability.\n- This setting improves"},"headers":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the MIME type of the AS2 message content, indicating the format of the data being transmitted. 
This helps the receiving system understand how to process and interpret the message payload.\n\n**Field behavior**\n- Defines the content type of the AS2 message payload.\n- Used by the receiver to determine how to parse and handle the message.\n- Typically corresponds to standard MIME types such as \"application/edi-x12\", \"application/edi-consent\", or \"application/xml\".\n- May influence processing rules, such as encryption, compression, or validation steps.\n\n**Implementation guidance**\n- Ensure the value is a valid MIME type string.\n- Use standard MIME types relevant to AS2 transmissions.\n- Avoid custom or non-standard MIME types unless both sender and receiver explicitly support them.\n- Validate the type value against expected content formats in the AS2 communication agreement.\n\n**Examples**\n- \"application/edi-x12\"\n- \"application/edi-consent\"\n- \"application/xml\"\n- \"text/plain\"\n\n**Important notes**\n- The type field is critical for interoperability between AS2 trading partners.\n- Incorrect or missing type values can lead to message processing errors or rejection.\n- This field is part of the AS2 message headers and should conform to AS2 protocol specifications.\n\n**Dependency chain**\n- Depends on the content of the AS2 message payload.\n- Influences downstream processing components that handle message parsing and validation.\n- May be linked with other headers such as content-transfer-encoding or content-disposition.\n\n**Technical details**\n- Represented as a string in MIME type format.\n- Included in the AS2 message headers under the \"Content-Type\" header.\n- Should comply with RFC 2045 and RFC 2046 MIME standards."},"value":{"type":"string","description":"The value of the AS2 header, representing the content associated with the specified header name.\n\n**Field behavior**\n- Holds the actual content or data for the AS2 header identified by the corresponding header name.\n- Can be a string or any valid data type supported by the AS2 protocol for header values.\n- Used during the construction or parsing of AS2 messages to specify header details.\n\n**Implementation guidance**\n- Ensure the value conforms to the expected format and encoding for AS2 headers.\n- Validate the value against any protocol-specific constraints or length limits.\n- When setting this value, consider the impact on message integrity and compliance with AS2 standards.\n- Support dynamic assignment to accommodate varying header requirements.\n\n**Examples**\n- \"application/edi-x12\"\n- \"12345\"\n- \"attachment; filename=\\\"invoice.xml\\\"\"\n- \"UTF-8\"\n\n**Important notes**\n- The value must be compatible with the corresponding header name to maintain protocol correctness.\n- Incorrect or malformed values can lead to message rejection or processing errors.\n- Some headers may require specific formatting or encoding (e.g., MIME types, character sets).\n\n**Dependency chain**\n- Dependent on the `as2.headers.name` property which specifies the header name.\n- Influences the overall AS2 message headers structure and processing logic.\n\n**Technical details**\n- Typically represented as a string in the AS2 message headers.\n- May require encoding or escaping depending on the header content.\n- Should comply with RFC 4130 and related AS2 specifications for header formatting."},"_id":{"type":"object","description":"Unique identifier for the AS2 message header.\n\n**Field behavior**\n- Serves as a unique key to identify the AS2 message header within the system.\n- Typically assigned 
automatically by the system or provided by the client to ensure message traceability.\n- Used to correlate messages and track message processing status.\n\n**Implementation guidance**\n- Should be a string value that uniquely identifies the header.\n- Must be immutable once assigned to prevent inconsistencies.\n- Should follow a consistent format (e.g., UUID, GUID) to avoid collisions.\n- Ensure uniqueness across all AS2 message headers in the system.\n\n**Examples**\n- \"123e4567-e89b-12d3-a456-426614174000\"\n- \"AS2Header_001\"\n- \"msg-header-20240427-0001\"\n\n**Important notes**\n- This field is critical for message tracking and auditing.\n- Avoid using easily guessable or sequential values to enhance security.\n- If not provided, the system should generate a unique identifier automatically.\n\n**Dependency chain**\n- May be referenced by other components or logs that track message processing.\n- Related to message metadata and processing status fields.\n\n**Technical details**\n- Data type: string\n- Format: typically UUID or a unique string identifier\n- Immutable after creation\n- Indexed for efficient lookup in databases or message stores"},"default":{"type":["object","null"],"description":"A boolean property that specifies whether the current header should be used as the default header in the AS2 message configuration. When set to true, this header will be applied automatically unless overridden by a more specific header setting.\n\n**Field behavior**\n- Determines if the header is the default choice for AS2 message headers.\n- If true, this header is applied automatically in the absence of other specific header configurations.\n- If false or omitted, the header is not considered the default and must be explicitly specified to be used.\n\n**Implementation guidance**\n- Use this property to designate a fallback or standard header configuration for AS2 messages.\n- Ensure only one header is marked as default to avoid conflicts.\n- Validate the boolean value to accept only true or false.\n- Consider the impact on message routing and processing when setting this property.\n\n**Examples**\n- `default: true` — This header is the default and will be used unless another header is specified.\n- `default: false` — This header is not the default and must be explicitly selected.\n\n**Important notes**\n- Setting multiple headers as default may lead to unpredictable behavior.\n- The default header is applied only when no other header overrides are present.\n- This property is optional; if omitted, the header is not treated as default.\n\n**Dependency chain**\n- Related to other header properties within `as2.headers`.\n- Influences message header selection logic in AS2 message processing.\n- May interact with routing or profile configurations that specify header usage.\n\n**Technical details**\n- Data type: boolean\n- Default value: false (if omitted)\n- Located under the `as2.headers` object in the configuration schema\n- Used during AS2 message construction to determine header application"}},"description":"A collection of HTTP headers included in the AS2 message transmission, serving as essential metadata and control information that governs the accurate handling, routing, authentication, and processing of the AS2 message between trading partners. These headers must strictly adhere to standard HTTP header formats and encoding rules, encompassing mandatory fields such as Content-Type, Message-ID, AS2-From, AS2-To, and other protocol-specific headers critical for AS2 communication. 
Proper configuration and validation of these headers are vital to maintaining message integrity, security (including encryption and digital signatures), and ensuring successful interoperability within the AS2 ecosystem. This collection supports both standard and custom headers as required by specific trading partner agreements or business needs, with header names treated as case-insensitive per HTTP standards.  \n**Field behavior:**  \n- Represents key-value pairs of HTTP headers transmitted alongside the AS2 message.  \n- Directly influences message routing, identification, authentication, security, and processing by the receiving system.  \n- Supports inclusion of both mandatory AS2 headers and additional custom headers as dictated by trading partner agreements or specific business requirements.  \n- Header names are case-insensitive, consistent with HTTP/1.1 specifications.  \n- Facilitates communication of protocol-specific instructions, security parameters (e.g., MIC, Disposition-Notification-To), and message metadata essential for AS2 transactions.  \n- Enables conveying information necessary for message encryption, signing, and receipt acknowledgments.  \n**Implementation guidance:**  \n- Ensure all header names and values conform strictly to HTTP/1.1 header syntax, character encoding, and escaping rules.  \n- Include all mandatory AS2 headers such as AS2-From, AS2-To, Message-ID, Content-Type, and security-related headers like MIC and Disposition-Notification-To.  \n- Avoid duplicate headers unless explicitly allowed by the AS2 specification or trading partner agreements.  \n- Validate header values rigorously for correctness, format compliance, and security to prevent injection attacks or protocol violations.  \n- Consider the impact of headers on message security features including encryption, digital signatures, and receipt acknowledgments.  \n- Maintain consistency with trading partner agreements regarding required, optional, and custom headers to ensure seamless interoperability.  \n- Use appropriate character encoding and escape sequences to handle special or non-ASCII characters within header values.  \n**Examples:**  \n- Content-Type: application/edi-x12  \n- AS2-From: PartnerA  \n- AS2-To: PartnerB  \n- Message-ID: <"}}},"Dynamodb":{"type":"object","description":"Configuration for DynamoDB imports","properties":{"region":{"type":"string","description":"Specifies the AWS region where the DynamoDB service is hosted and where all database operations will be executed. This property defines the precise geographical location of the DynamoDB instance, which is essential for optimizing application latency, ensuring compliance with data residency and sovereignty regulations, and aligning with organizational infrastructure strategies. Selecting the appropriate region helps minimize network latency by placing data physically closer to end-users or application servers, thereby improving performance and responsiveness. Additionally, the chosen region influences service availability, pricing structures, feature sets, and legal compliance requirements, making it a critical configuration parameter. The region value must be a valid AWS region identifier (such as \"us-east-1\" or \"eu-west-1\") and is used to construct the correct service endpoint URL for all DynamoDB interactions.
Proper configuration of this property is vital for seamless integration with AWS SDKs, authentication mechanisms, and IAM policies that may be region-specific, ensuring secure and efficient access to DynamoDB resources.\n\n**Field behavior**\n- Determines the AWS data center location for all DynamoDB operations.\n- Directly impacts network latency, data residency, and compliance adherence.\n- Must be specified as a valid and supported AWS region code.\n- Influences the endpoint URL, authentication endpoints, and service routing.\n- Affects service availability, pricing, feature availability, and latency.\n- Remains consistent throughout the lifecycle of the application unless explicitly changed.\n\n**Implementation guidance**\n- Validate the region against the current list of supported AWS regions to prevent misconfiguration and runtime errors.\n- Use the region value to dynamically construct the DynamoDB service endpoint URL and configure SDK clients accordingly.\n- Allow the region to be configurable and overrideable to support testing environments, multi-region deployments, disaster recovery, or failover scenarios.\n- Ensure consistency of the region setting across all AWS service configurations within the application to avoid cross-region conflicts and data inconsistency.\n- Consider the implications of region selection on data sovereignty laws, compliance requirements, and organizational policies.\n- Monitor AWS announcements for new regions or changes in service availability to keep configurations up to date.\n\n**Examples**\n- \"us-east-1\" (Northern Virginia, USA)\n- \"eu-west-1\" (Ireland, Europe)\n- \"ap-southeast-2\" (Sydney, Australia)\n- \"ca-central-1\" (Canada Central)\n- \"sa-east-1\" (São Paulo, Brazil)\n\n**Important notes**\n- Selecting the correct region is crucial for optimizing application performance, cost efficiency,"},"method":{"type":"string","enum":["putItem","updateItem"],"description":"Specifies the DynamoDB operation this import performs against the target table. The value maps directly to the underlying DynamoDB API call and determines which of the other DynamoDB properties apply to each request.\n\n- **putItem**: Writes the full item document, creating a new item or completely replacing an existing item with the same primary key (DynamoDB PutItem).\n- **updateItem**: Edits the attributes of an existing item identified by its primary key, or adds the item if it does not exist, using an update expression (DynamoDB UpdateItem).\n\n**Field behavior**\n- Determines whether each record is written with a PutItem or an UpdateItem request.\n- Works together with tableName, partitionKey, and sortKey to target the correct item.\n- With putItem, the itemDocument property supplies the complete item payload.\n- With updateItem, the updateExpression, expressionAttributeNames, and expressionAttributeValues properties define which attributes change.\n\n**Implementation guidance**\n- Use putItem when each incoming record carries the full desired state of the item.\n- Use updateItem when only a subset of attributes should be modified and all other attributes must be preserved.\n- Combine updateItem with conditionExpression to guard against unintended overwrites or to enforce business rules.\n\n**Important notes**\n- Only putItem and updateItem are supported; other DynamoDB operations are not available for imports.\n- putItem replaces the entire item, so any attributes not present in itemDocument are removed; updateItem leaves unreferenced attributes unchanged."},"tableName":{"type":"string","description":"The name of the DynamoDB table to be accessed or manipulated, serving as the primary identifier for routing API requests to the correct data store. This string must exactly match the name of an existing DynamoDB table within the specified AWS account and region, including case sensitivity.
It is essential for performing operations such as reading, writing, updating, or deleting items within the table, ensuring that all data interactions target the intended resource accurately.\n\n**Field behavior**\n- Specifies the exact DynamoDB table targeted for all data operations.\n- Must be unique within the AWS account and region to prevent ambiguity.\n- Directs API calls to the appropriate table for the requested action.\n- Case-sensitive and must strictly adhere to DynamoDB naming conventions.\n- Acts as a critical routing parameter in API requests to identify the data source.\n\n**Implementation guidance**\n- Confirm the table name matches the existing DynamoDB table in AWS, respecting case sensitivity.\n- Avoid using reserved words or disallowed special characters to prevent API errors.\n- Validate the table name format programmatically before making API calls.\n- Manage table names through environment variables or configuration files to facilitate deployment across multiple environments (development, staging, production).\n- Ensure all application components referencing the table name are updated consistently if the table name changes.\n- Incorporate error handling to manage cases where the specified table does not exist or is inaccessible.\n\n**Examples**\n- \"Users\"\n- \"Orders2024\"\n- \"Inventory_Table\"\n- \"CustomerData\"\n- \"Sales-Data.2024\"\n\n**Important notes**\n- DynamoDB table names can be up to 255 characters in length.\n- Table names are case-sensitive and must be used consistently across all API calls.\n- The specified table must exist in the targeted AWS region before any operation is performed.\n- Changing the table name requires updating all dependent code and redeploying applications as necessary.\n- Incorrect table names will result in API errors such as ResourceNotFoundException.\n\n**Dependency chain**\n- Dependent on the AWS account and region context where DynamoDB is accessed.\n- Requires appropriate IAM permissions to perform operations on the specified table.\n- Works in conjunction with other DynamoDB parameters such as key schema, attribute definitions, and provisioned throughput settings.\n- Relies on network connectivity and AWS service availability for successful API interactions.\n\n**Technical details**\n- Represented as a string value.\n- Must conform to DynamoDB naming rules: allowed characters include alphanumeric characters, underscore (_), hy"},"partitionKey":{"type":"string","description":"partitionKey specifies the attribute name used as the primary partition key in a DynamoDB table. This key is essential for uniquely identifying each item within the table and determines the partition where the data is physically stored. It plays a fundamental role in data distribution, scalability, and query efficiency by enabling DynamoDB to hash the key value and allocate items across multiple storage partitions. The partition key must be present in every item and is required for all operations involving item creation, retrieval, update, and deletion. Selecting an appropriate partition key with high cardinality ensures even data distribution, prevents performance bottlenecks, and supports efficient query patterns. 
When combined with an optional sort key, it forms a composite primary key that enables more complex data models and range queries.\n\n**Field behavior**\n- Defines the primary attribute used to partition and organize data in a DynamoDB table.\n- Must have unique values within the context of the partition to ensure item uniqueness.\n- Drives the distribution of data across partitions to optimize performance and scalability.\n- Required for all item-level operations such as put, get, update, and delete.\n- Works in tandem with an optional sort key to form a composite primary key for more complex data models.\n\n**Implementation guidance**\n- Select an attribute with a high cardinality (many distinct values) to promote even data distribution and avoid hot partitions.\n- Ensure the partition key attribute is included in every item stored in the table.\n- Analyze application access patterns to choose a partition key that supports efficient queries and minimizes read/write contention.\n- Use a composite key (partition key + sort key) when you need to model one-to-many relationships or perform range queries.\n- Avoid using attributes with low variability or skewed distributions as partition keys to prevent performance bottlenecks.\n\n**Examples**\n- \"userId\" for applications where data is organized by individual users.\n- \"orderId\" in an e-commerce system to uniquely identify each order.\n- \"deviceId\" for storing telemetry data from IoT devices.\n- \"customerEmail\" when email addresses serve as unique identifiers.\n- \"regionCode\" combined with a sort key for geographic data partitioning.\n\n**Important notes**\n- The partition key is mandatory and immutable after table creation; changing it requires creating a new table.\n- The attribute type must be one of DynamoDB’s supported scalar types: String, Number, or Binary.\n- Partition key values are hashed internally by DynamoDB to determine the physical partition.\n- Using a poorly chosen partition key"},"sortKey":{"type":"string","description":"The attribute name designated as the sort key (also known as the range key) in a DynamoDB table. This key is essential for ordering and organizing items that share the same partition key, enabling efficient sorting, range queries, and precise retrieval of data within a partition. Together with the partition key, the sort key forms the composite primary key that uniquely identifies each item in the table. It plays a critical role in query operations by allowing filtering, sorting, and conditional retrieval based on its values, which can be of type String, Number, or Binary. 
Proper definition and usage of the sort key significantly enhance data access patterns, query performance, and cost efficiency in DynamoDB.\n\n**Field behavior**\n- Defines the attribute DynamoDB uses to sort and organize items within the same partition key.\n- Enables efficient range queries, ordered retrieval, and filtering of items.\n- Must be unique in combination with the partition key for each item to ensure item uniqueness.\n- Supports composite key structures when combined with the partition key.\n- Influences the physical storage order of items within a partition, optimizing query performance.\n\n**Implementation guidance**\n- Specify the attribute name exactly as defined in the DynamoDB table schema.\n- Ensure the attribute type matches the table definition (String, Number, or Binary).\n- Use this key to perform queries requiring sorting, range-based filtering, or conditional retrieval.\n- Omit or set to null if the DynamoDB table does not define a sort key.\n- Validate that the sort key attribute exists and is properly indexed in the table schema before use.\n- Consider the sort key design carefully to support intended query patterns and access efficiency.\n\n**Examples**\n- \"timestamp\"\n- \"createdAt\"\n- \"orderId\"\n- \"category#date\"\n- \"eventDate\"\n- \"userScore\"\n\n**Important notes**\n- The sort key is optional; some DynamoDB tables only have a partition key without a sort key.\n- Queries specifying both partition key and sort key conditions are more efficient and cost-effective.\n- The sort key attribute must be pre-defined in the table schema and cannot be added dynamically.\n- The combination of partition key and sort key must be unique for each item.\n- Sort key values influence the physical storage order of items within a partition, impacting query speed.\n- Using a well-designed sort key can reduce the need for additional indexes and improve scalability.\n\n**Dependency chain**\n- Requires a DynamoDB table configured with a composite primary key (partition key"},"itemDocument":{"type":"string","description":"The `itemDocument` property represents the complete and authoritative data record stored within a DynamoDB table. It is structured as a JSON object that comprehensively includes all attribute names and their corresponding values, fully reflecting the schema and data model of the DynamoDB item. This property encapsulates every attribute of the item, including mandatory primary keys, optional sort keys (if applicable), and any additional data fields, supporting complex nested objects, maps, lists, and arrays to accommodate DynamoDB’s rich data types. 
It serves as the primary representation of an item for all item-level operations such as reading (GetItem), writing (PutItem), updating (UpdateItem), and deleting (DeleteItem) within DynamoDB, ensuring data integrity and consistency throughout these processes.\n\n**Field behavior**\n- Contains the full and exact representation of a DynamoDB item as a structured JSON document.\n- Includes all required attributes such as primary keys and sort keys, as well as any optional or additional fields.\n- Supports complex nested data structures including maps, lists, and sets, consistent with DynamoDB’s flexible data model.\n- Acts as the main payload for CRUD (Create, Read, Update, Delete) operations on DynamoDB items.\n- Reflects either the current stored state or the intended new state of the item depending on the operation context.\n- Can represent partial documents when used with projection expressions or update operations that modify subsets of attributes.\n\n**Implementation guidance**\n- Ensure strict adherence to DynamoDB’s supported data types (String, Number, Binary, Boolean, Null, List, Map, Set) and attribute naming conventions.\n- Validate presence and correct formatting of primary key attributes to uniquely identify the item for key-based operations.\n- For update operations, provide either the entire item document or only the attributes to be modified, depending on the API method and update strategy.\n- Maintain consistent attribute names and data types across operations to preserve schema integrity and prevent runtime errors.\n- Utilize attribute projections or partial documents when full item data is unnecessary to optimize performance, reduce latency, and minimize cost.\n- Handle nested structures carefully to ensure proper serialization and deserialization between application code and DynamoDB.\n\n**Examples**\n- `{ \"UserId\": \"12345\", \"Name\": \"John Doe\", \"Age\": 30, \"Preferences\": { \"Language\": \"en\", \"Notifications\": true } }`\n- `{ \"OrderId\": \"A100\", \"Date\": \"2024-06-01\", \"Items\":"},"updateExpression":{"type":"string","description":"A string that defines the specific attributes to be updated, added, or removed within a DynamoDB item using DynamoDB's UpdateExpression syntax. This expression provides precise control over how item attributes are modified by including one or more of the following clauses: SET (to assign or overwrite attribute values), REMOVE (to delete attributes), ADD (to increment numeric attributes or add elements to sets), and DELETE (to remove elements from sets). The updateExpression enables atomic, conditional, and efficient updates without replacing the entire item, ensuring data integrity and minimizing write costs. It must be carefully constructed using placeholders for attribute names and values—ExpressionAttributeNames and ExpressionAttributeValues—to avoid conflicts with reserved keywords and prevent injection vulnerabilities. 
This expression is essential for performing complex update operations in a single request, supporting both simple attribute changes and advanced manipulations like incrementing counters or modifying sets.\n\n**Field behavior**\n- Specifies one or more atomic update operations on a DynamoDB item's attributes.\n- Supports combining multiple update actions (SET, REMOVE, ADD, DELETE) within a single expression, separated by spaces.\n- Requires strict adherence to DynamoDB's UpdateExpression syntax and semantics.\n- Works in conjunction with ExpressionAttributeNames and ExpressionAttributeValues to safely reference attribute names and values.\n- Only affects the attributes explicitly mentioned; all other attributes remain unchanged.\n- Enables conditional updates when combined with ConditionExpression for safe concurrent modifications.\n\n**Implementation guidance**\n- Construct the updateExpression using valid DynamoDB syntax, incorporating clauses such as SET, REMOVE, ADD, and DELETE as needed.\n- Always use placeholders (e.g., '#attrName' for attribute names and ':value' for attribute values) to avoid conflicts with reserved words and enhance security.\n- Validate the expression syntax and ensure all referenced placeholders are defined before executing the update operation.\n- Combine multiple update actions by separating them with spaces within the same expression string.\n- Test expressions thoroughly to prevent runtime errors caused by syntax issues or missing placeholders.\n- Use SET for assigning or overwriting attribute values, REMOVE for deleting attributes, ADD for incrementing numbers or adding set elements, and DELETE for removing set elements.\n- Avoid using reserved keywords directly; always substitute them with ExpressionAttributeNames placeholders.\n\n**Examples**\n- \"SET #name = :newName, #age = :newAge REMOVE #obsoleteAttribute\"\n- \"ADD #score :incrementValue DELETE #tags :tagToRemove\"\n- \"SET #status = :statusValue\"\n- \"REMOVE #temporaryField"},"conditionExpression":{"type":"string","description":"A string that defines the conditional logic to be applied to a DynamoDB operation, specifying the precise criteria that must be met for the operation to proceed. This expression leverages DynamoDB's condition expression syntax to evaluate attributes of the targeted item, enabling fine-grained control over operations such as PutItem, UpdateItem, DeleteItem, and transactional writes. It supports a comprehensive set of logical operators (AND, OR, NOT), comparison operators (=, <, >, BETWEEN, IN), functions (attribute_exists, attribute_not_exists, begins_with, contains, size), and attribute path notations to construct complex, nested conditions. 
This ensures data integrity by preventing unintended modifications and enforcing business rules at the database level.\n\n**Field behavior**\n- Determines whether the DynamoDB operation executes based on the evaluation of the condition against the item's attributes.\n- Evaluated atomically before performing the operation; if the condition evaluates to false, the operation is aborted and no changes are made.\n- Supports combining multiple conditions using logical operators for complex, multi-attribute criteria.\n- Utilizes built-in functions to inspect attribute existence, value patterns, and sizes, enabling sophisticated conditional checks.\n- Applies exclusively to the specific item targeted by the operation, including nested attributes accessed via attribute path notation.\n- Influences transactional operations by ensuring all conditions in a transaction are met before committing.\n\n**Implementation guidance**\n- Always use ExpressionAttributeNames and ExpressionAttributeValues to safely reference attribute names and values, avoiding conflicts with reserved keywords and improving readability.\n- Validate the syntax of the condition expression prior to request submission to prevent runtime errors and failed operations.\n- Carefully construct expressions to handle edge cases, such as attributes that may be missing or have null values.\n- Combine multiple conditions logically to enforce precise constraints and business logic on the operation.\n- Test condition expressions thoroughly across diverse data scenarios to ensure expected behavior and robustness.\n- Keep expressions as simple and efficient as possible to optimize performance and reduce request complexity.\n\n**Examples**\n- \"attribute_exists(#name) AND #age >= :minAge\"\n- \"#status = :activeStatus\"\n- \"attribute_not_exists(#id)\"\n- \"#score BETWEEN :low AND :high\"\n- \"begins_with(#title, :prefix)\"\n- \"contains(#tags, :tagValue) OR size(#comments) > :minComments\"\n\n**Important notes**\n- The conditionExpression is optional but highly recommended to safeguard against unintended overwrites, deletions, or updates.\n- If the condition"},"expressionAttributeNames":{"type":"string","description":"expressionAttributeNames is a map of substitution tokens used to safely reference attribute names within DynamoDB expressions. This property allows you to define placeholder tokens for attribute names that might otherwise cause conflicts due to being reserved words, containing special characters, or having names that are dynamically generated. 
By using these placeholders, you can avoid syntax errors and ambiguities in expressions such as ProjectionExpression, FilterExpression, UpdateExpression, and ConditionExpression, ensuring your queries and updates are both valid and maintainable.\n\n**Field behavior**\n- Functions as a dictionary where each key is a placeholder token starting with a '#' character, and each value is the actual attribute name it represents.\n- Enables the use of reserved words, special characters, or otherwise problematic attribute names within DynamoDB expressions.\n- Supports complex expressions by allowing multiple attribute name substitutions within a single expression.\n- Prevents syntax errors and improves clarity by explicitly mapping placeholders to attribute names.\n- Applies only within the scope of the expression where it is defined, ensuring localized substitution without affecting other parts of the request.\n\n**Implementation guidance**\n- Always prefix keys with a '#' to clearly indicate they are substitution tokens.\n- Ensure each placeholder token is unique within the scope of the expression to avoid conflicts.\n- Use expressionAttributeNames whenever attribute names are reserved words, contain special characters, or are dynamically generated.\n- Combine with expressionAttributeValues when expressions require both attribute name and value substitutions.\n- Verify that the attribute names used as values exactly match those defined in your DynamoDB table schema to prevent runtime errors.\n- Keep the mapping concise and relevant to the attributes used in the expression to maintain readability.\n- Avoid overusing substitution tokens for attribute names that do not require it, to keep expressions straightforward.\n\n**Examples**\n- `{\"#N\": \"Name\", \"#A\": \"Age\"}` to substitute attribute names \"Name\" and \"Age\" in an expression.\n- `{\"#status\": \"Status\"}` when \"Status\" is a reserved word or needs escaping in an expression.\n- `{\"#yr\": \"Year\", \"#mn\": \"Month\"}` for expressions involving multiple date-related attributes.\n- `{\"#addr\": \"Address\", \"#zip\": \"ZipCode\"}` to handle attribute names with special characters or spaces.\n- `{\"#P\": \"Price\", \"#Q\": \"Quantity\"}` used in an UpdateExpression to increment numeric attributes.\n\n**Important notes**\n- Keys must always begin with the '#' character; otherwise, the substitution"},"expressionAttributeValues":{"type":"string","description":"expressionAttributeValues is a map of substitution tokens for attribute values used within DynamoDB expressions. These tokens serve as placeholders that safely inject values into expressions such as ConditionExpression, FilterExpression, UpdateExpression, and KeyConditionExpression. By separating data from code, this mechanism prevents injection attacks and syntax errors, ensuring secure and reliable query and update operations. Each key in this map is a placeholder string that begins with a colon (\":\"), and each corresponding value is a DynamoDB attribute value object representing the actual data to be used in the expression. 
This design enforces that user input is never directly embedded into expressions, enhancing both security and maintainability.\n\n**Field behavior**\n- Contains key-value pairs where keys are placeholders prefixed with a colon (e.g., \":val1\") used within DynamoDB expressions.\n- Each key maps to a DynamoDB attribute value object specifying the data type and value, supporting all DynamoDB data types including strings, numbers, binaries, lists, maps, sets, and booleans.\n- Enables safe and dynamic substitution of attribute values in expressions, avoiding direct string concatenation and reducing the risk of injection vulnerabilities.\n- Mandatory when expressions reference attribute values to ensure correct parsing and execution of queries or updates.\n- Placeholders are case-sensitive and must exactly match those used in the corresponding expressions.\n- Supports complex data structures, allowing nested maps and lists as attribute values for advanced querying scenarios.\n\n**Implementation guidance**\n- Always prefix placeholder keys with a colon (\":\") to clearly identify them as substitution tokens within expressions.\n- Ensure every placeholder referenced in an expression has a corresponding entry in expressionAttributeValues to avoid runtime errors.\n- Format each value according to DynamoDB’s attribute value specification, such as { \"S\": \"string\" }, { \"N\": \"123\" }, { \"BOOL\": true }, or complex nested structures.\n- Avoid using reserved words, spaces, or invalid characters in placeholder names to prevent syntax errors and conflicts.\n- Validate that the data types of values align with the expected attribute types defined in the table schema to maintain data integrity.\n- When multiple expressions are used simultaneously, maintain unique and consistent placeholder names to prevent collisions and ambiguity.\n- Use expressionAttributeValues in conjunction with expressionAttributeNames when attribute names are reserved words or contain special characters to ensure proper expression parsing.\n- Consider the size limitations of expressionAttributeValues to avoid exceeding DynamoDB’s request size limits.\n\n**Examples**\n- { \":val1\": { \"S\": \""},"ignoreExtract":{"type":"string","description":"Specifies whether the extraction process from DynamoDB data should be ignored or skipped during the operation, enabling conditional control over data retrieval to optimize performance, reduce latency, or avoid redundant processing in data workflows. This flag allows systems to bypass the extraction step when data is already available, cached, or deemed unnecessary for the current operation, thereby improving efficiency and resource utilization.  \n**Field behavior:**  \n- When set to true, the extraction of data from DynamoDB is completely bypassed, preventing any data retrieval from the source.  \n- When set to false or omitted, the extraction process proceeds as normal, retrieving data for further processing.  \n- Primarily used to optimize workflows by skipping unnecessary extraction steps when data is already available or not needed.  \n- Influences the flow of data processing by controlling whether DynamoDB data is fetched or not.  \n- Affects downstream processing by potentially providing no new data input if extraction is skipped.  \n**Implementation guidance:**  \n- Accepts a boolean value: true to skip extraction, false to perform extraction.  \n- Ensure that downstream components can handle scenarios where no data is extracted due to this flag being true.  
\n- Validate input to avoid unintended skipping of extraction that could cause data inconsistencies or stale results.  \n- Use this flag judiciously, especially in complex pipelines where data dependencies exist, to prevent breaking data integrity.  \n- Incorporate appropriate logging or monitoring to track when extraction is skipped for auditability and debugging.  \n**Examples:**  \n- ignoreExtract: true — extraction from DynamoDB is skipped, useful when data is cached or pre-fetched.  \n- ignoreExtract: false — extraction occurs normally, retrieving fresh data from DynamoDB.  \n- omit the property entirely — defaults to false, extraction proceeds as usual.  \n**Important notes:**  \n- Skipping extraction may result in incomplete, outdated, or stale data if subsequent operations rely on fresh DynamoDB data.  \n- This option should only be enabled when you are certain that extraction is unnecessary or redundant to avoid data inconsistencies.  \n- Consider the impact on data integrity, consistency, and downstream processing before enabling this flag.  \n- Misuse can lead to errors, unexpected behavior, or data quality issues in data workflows.  \n- Ensure that any caching or alternative data sources used in place of extraction are reliable and up-to-date.  \n**Dependency chain:**  \n- Requires a valid DynamoDB data source configuration to be relevant.  \n-"}}},"Http":{"type":"object","description":"Configuration for HTTP imports.\n\nIMPORTANT: When the _connectionId field points to a connection where the type is http, \nthis object MUST be populated for the import to function properly. This is a required configuration\nfor all HTTP-based imports, as determined by the connection associated with the import.\n","properties":{"formType":{"type":"string","enum":["assistant","http","rest","graph_ql","assistant_graphql"],"description":"The form type to use for the import.\n- **assistant**: The import is for a system that we have an assistant form for.  This is the default value if there is a connector available for the system.\n- **http**: The import is an HTTP import.\n- **rest**: This value is only used for legacy imports that are not supported by the new HTTP framework.\n- **graph_ql**: The import is a GraphQL import.\n- **assistant_graphql**: The import is for a system that we have an assistant form for and utilizes GraphQL.  This is the default value if there is a connector available for the system and the connector supports GraphQL.\n"},"type":{"type":"string","enum":["file","records"],"description":"Specifies the type of data being imported via HTTP.\n\n- **file**: The import handles raw file content (binary data, PDF, images, etc.)\n- **records**: The import handles structured record data (JSON objects, XML records, etc.)
- most common\n\nMost HTTP imports use \"records\" type for standard REST API data imports.\nOnly use \"file\" type when importing raw file content that will be processed downstream.\n"},"requestMediaType":{"type":"string","enum":["xml","json","csv","urlencoded","form-data","octet-stream","plaintext"],"description":"Specifies the media type (content type) of the request body sent to the target API.\n\n- **json**: For JSON request bodies (Content-Type: application/json) - most common\n- **xml**: For XML request bodies (Content-Type: application/xml)\n- **csv**: For CSV data (Content-Type: text/csv)\n- **urlencoded**: For URL-encoded form data (Content-Type: application/x-www-form-urlencoded)\n- **form-data**: For multipart form data\n- **octet-stream**: For binary data\n- **plaintext**: For plain text content\n"},"_httpConnectorEndpointIds":{"type":"array","items":{"type":"string","format":"objectId"},"description":"Array of HTTP connector endpoint IDs to use for this import.\nMultiple endpoints can be specified for different request types or operations.\n"},"blobFormat":{"type":"string","enum":["utf8","ucs2","utf-16le","ascii","binary","base64","hex"],"description":"Character encoding format for blob/binary data imports.\nOnly relevant when type is \"file\" or when handling binary content.\n"},"batchSize":{"type":"integer","description":"Maximum number of records to submit per HTTP request.\n\n- REST services typically use batchSize=1 (one record per request)\n- REST batch endpoints or RPC/XML services can handle multiple records\n- Affects throughput and API rate limiting\n\nDefault varies by API - consult target API documentation for optimal value.\n"},"successMediaType":{"type":"string","enum":["xml","json","plaintext"],"description":"Specifies the media type expected in successful responses.\n\n- **json**: For JSON responses (most common)\n- **xml**: For XML responses\n- **plaintext**: For plain text responses\n"},"requestType":{"type":"array","items":{"type":"string","enum":["CREATE","UPDATE"]},"description":"Specifies the type of operation(s) this import performs.\n\n- **CREATE**: Creating new records\n- **UPDATE**: Updating existing records\n\nFor composite/upsert imports, specify both operations. The array is positionally\naligned with relativeURI and method arrays — each index maps to one operation.\nThe existingExtract field determines which operation is used at runtime.\n\nExample composite upsert:\n```json\n\"requestType\": [\"UPDATE\", \"CREATE\"],\n\"method\": [\"PUT\", \"POST\"],\n\"relativeURI\": [\"/customers/{{{data.0.customerId}}}.json\", \"/customers.json\"],\n\"existingExtract\": \"customerId\"\n```\n"},"errorMediaType":{"type":"string","enum":["xml","json","plaintext"],"description":"Specifies the media type expected in error responses.\n\n- **json**: For JSON error responses (most common)\n- **xml**: For XML error responses\n- **plaintext**: For plain text error messages\n"},"relativeURI":{"type":"array","items":{"type":"string"},"description":"**CRITICAL: This MUST be an array of strings, NOT an object.**\n\n**Handlebars data context:** At runtime, `relativeURI` templates always render against the\n**pre-mapped** record — the original input record that arrives at this import step, before\nthe Import's mapping step is applied. 
This is intentional: URI construction usually needs\nbusiness identifiers that mappings may rename or remove.\n\nArray of relative URI path(s) for the import endpoint.\nCan include handlebars expressions like `{{field}}` for dynamic values.\n\n**Single-operation imports (one element)**\n```json\n\"relativeURI\": [\"/api/v1/customers\"]\n```\n\n**Composite/upsert imports (multiple elements — positionally aligned)**\nFor imports with both CREATE and UPDATE in requestType, relativeURI, method,\nand requestType arrays are **positionally aligned**. Each index corresponds\nto one operation. The existingExtract field determines which index is used at\nruntime: if the existingExtract field has a value → use the UPDATE index;\nif empty/missing → use the CREATE index.\n\nExample of a composite upsert:\n```json\n\"relativeURI\": [\"/customers/{{{data.0.shopifyCustomerId}}}.json\", \"/customers.json\"],\n\"method\": [\"PUT\", \"POST\"],\n\"requestType\": [\"UPDATE\", \"CREATE\"],\n\"existingExtract\": \"shopifyCustomerId\"\n```\nHere index 0 is the UPDATE operation (PUT to a specific customer) and index 1 is\nthe CREATE operation (POST to the collection endpoint).\n\n**IMPORTANT**: For composite imports, use the `{{{data.0.fieldName}}}` handlebars\nsyntax (triple braces with data.0 prefix) to reference the existing record ID in\nthe UPDATE URI. Do NOT use `{{#if}}` conditionals in a single URI — use separate\narray elements instead.\n\n**Wrong format (DO not do THIS)**\n```json\n\"relativeURI\": {\"/api/v1/customers\": \"CREATE\"}\n```\n```json\n\"relativeURI\": [\"{{#if id}}/customers/{{id}}.json{{else}}/customers.json{{/if}}\"]\n```\n"},"method":{"type":"array","items":{"type":"string","enum":["GET","PUT","POST","PATCH","DELETE"]},"description":"HTTP method(s) used for import requests.\n\n- **POST**: Most common for creating new records\n- **PUT**: For full updates/replacements\n- **PATCH**: For partial updates\n- **DELETE**: For deletions\n- **GET**: Rarely used for imports\n\nFor composite/upsert imports, specify multiple methods positionally aligned\nwith relativeURI and requestType arrays. Each index maps to one operation.\n\nExample: `[\"PUT\", \"POST\"]` with `requestType: [\"UPDATE\", \"CREATE\"]` means\nindex 0 uses PUT for updates, index 1 uses POST for creates.\n"},"_httpConnectorVersionId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector version being used."},"_httpConnectorResourceId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector resource being used."},"_httpConnectorEndpointId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector endpoint being used (single endpoint)."},"body":{"type":"array","items":{"type":"object"},"description":"**Important:** This field should be LEFT UNDEFINED for the vast majority of HTTP imports.\n\nThis is an OPTIONAL field that should only be set in very specific, rare cases. 
For standard REST API imports\nthat send JSON data, this field MUST be left undefined because data transformation is handled through the\nImport's mapping step, not through this body template.\n\n**When to leave this field undefined (MOST common CASE)**\n\nLeave this field undefined for ALL standard data imports, including:\n- REST API imports that send JSON records to create/update resources\n- APIs that accept JSON request bodies (the majority of modern APIs)\n- Any import where you want to use Import mappings to transform data\n- Standard CRUD operations (Create, Update, Delete) via REST APIs\n- GraphQL mutations that accept JSON input\n- Any import where the Import mapping step will handle data transformation\n\nExamples of imports that should have this field undefined:\n- \"Import customer data into Shopify\" → undefined (mappings handle the data transformation)\n- \"Create orders via custom REST API\" → undefined (mappings handle the data transformation)\n\n**KEY CONCEPT:** Imports have a dedicated \"Mapping\" step that transforms source data into the target\nformat. This is the standard, preferred approach. When the body field is set, the mapping step still runs\nfirst — the body template then renders against the **post-mapped** record instead of the destination API\nreceiving the mapped record as-is. This lets you post-process the mapped output into XML, SOAP envelopes,\nor other non-JSON shapes, but forces you to hand-template the entire request and is harder to maintain and debug.\n\n**When to set this field (RARE use CASE)**\n\nSet this field ONLY when:\n1. The target API requires XML or SOAP format (not JSON)\n2. You need a highly customized request structure that cannot be achieved through standard mappings\n3. You're working with a legacy API that requires very specific formatting\n4. The API documentation explicitly shows a complex XML/SOAP envelope structure\n5. When the user explicitly specifies a request body\n\nExamples of when to set body:\n- \"Import to SOAP web service\" → set body with XML/SOAP template\n- \"Send data to XML-only legacy API\" → set body with XML template\n- \"Complex SOAP envelope with multiple nested elements\" → set body with SOAP template\n- \"Create an import to /test with the body as {'key': 'value', 'key2': 'value2', ...}\"\n\n**Implementation details**\n\nWhen this field is undefined (default for most imports):\n- The system uses the Import's mapping step to transform data\n- Mappings provide a visual, intuitive way to map source fields to target fields\n- The mapped data is automatically sent as the request body\n- Easier to debug and maintain\n\nWhen this field is set:\n- The mapping step still runs — it produces a post-mapped record that is then fed into this body template\n- The template renders against the **post-mapped** record (use `{{record.<mappedField>}}` to reference mapping outputs)\n- You are responsible for shaping the final request body (XML/SOAP/custom JSON) by wrapping / restructuring the mapped data\n- Harder to maintain and debug than relying on mappings alone\n\n**Decision flowchart**\n\n1. Does the target API accept JSON request bodies?\n   → YES: Leave this field undefined (use mappings instead)\n2. Are you comfortable using the Import's mapping step?\n   → YES: Leave this field undefined\n3. Does the API require XML or SOAP format?\n   → YES: Set this field with appropriate XML/SOAP template\n4. 
Does the API require a highly unusual request structure that mappings can't handle?\n   → YES: Consider setting this field (but try mappings first)\n\nRemember: When in doubt, leave this field undefined. Almost all modern REST APIs work perfectly\nwith the standard mapping approach, and this is much easier to use and maintain.\n"},"existingExtract":{"type":"string","description":"Field name or JSON path used to determine if a record already exists in the destination,\nfor composite (upsert) imports that have both CREATE and UPDATE request types.\n\nWhen the import has requestType: [\"UPDATE\", \"CREATE\"] (or [\"CREATE\", \"UPDATE\"]):\n- If the field specified by existingExtract has a value in the incoming record,\n  the UPDATE operation (PUT) is used with the corresponding relativeURI and method.\n- If the field is empty or missing, the CREATE operation (POST) is used.\n\nThis field drives the upsert decision and must match a field that is populated by\nan upstream lookup or response mapping (e.g., a destination system ID like \"shopifyCustomerId\").\n\nIMPORTANT: Only set this field for composite/upsert imports that have BOTH CREATE and UPDATE\nin requestType. Do not set for single-operation imports.\n"},"ignoreExtract":{"type":"string","description":"JSON path to the field in the record that is used to determine if the record already exists.\nOnly used when ignoreMissing or ignoreExisting is set.\n"},"endPointBodyLimit":{"type":"integer","description":"Maximum size limit for the request body in bytes.\nUsed to enforce API-specific size constraints.\n"},"headers":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}},"description":"Custom HTTP headers to include with import requests.\n\nEach header has a name and value, which can include handlebars expressions.\n\nAt runtime, header value templates render against the **pre-mapped**\nrecord (the original input record, before the Import's mapping step).\nUse `{{record.<field>}}` to reference fields as they appear in the\nupstream source — mappings don't rename or drop fields for header\nevaluation.\n"},"response":{"type":"object","properties":{"resourcePath":{"type":"array","items":{"type":"string"},"description":"JSON path to the resource collection in the response.\nRequired when batchSize > 1.\n"},"resourceIdPath":{"type":"array","items":{"type":"string"},"description":"JSON path to the ID field in each resource.\nIf not specified, system looks for \"id\" or \"_id\" fields.\n"},"successPath":{"type":"array","items":{"type":"string"},"description":"JSON path to a field that indicates success.\nUsed for APIs that don't rely solely on HTTP status codes.\n"},"successValues":{"type":"array","items":{"type":"array","items":{"type":"string"}},"description":"Values at successPath that indicate success.\n"},"failPath":{"type":"array","items":{"type":"string"},"description":"JSON path to a field that indicates failure (even if status=200).\n"},"failValues":{"type":"array","items":{"type":"array","items":{"type":"string"}},"description":"Values at failPath that indicate failure.\n"},"errorPath":{"type":"string","description":"JSON path to error message in the response.\n"},"allowArrayforSuccessPath":{"type":"boolean","description":"Whether to allow array values for successPath evaluation.\n"},"hasHeader":{"type":"boolean","description":"Indicates if the first record in the response is a header row (for CSV responses).\n"}},"description":"Configuration for parsing and interpreting HTTP 
responses.\n"},"_asyncHelperId":{"type":"string","format":"objectId","description":"Reference to an AsyncHelper resource for handling asynchronous import operations.\n\nUsed when the target API uses a \"fire-and-check-back\" pattern (HTTP 202, polling, etc.).\n"},"ignoreLookupName":{"type":"string","description":"Name of the lookup to use for checking resource existence.\nOnly used when ignoreMissing or ignoreExisting is set.\n"}}},"Ftp":{"type":"object","description":"Configuration for FTP imports","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"Identifier for the third-party connector utilized in the FTP integration, serving as a unique and immutable reference that links the FTP configuration to an external connector service. This identifier is essential for accurately associating the FTP connection with the intended third-party system, enabling seamless and secure data transfer, synchronization, and management through the specified connector. It ensures that all FTP operations are correctly routed and managed via the external service, supporting reliable integration workflows and maintaining system integrity throughout the connection lifecycle.\n\n**Field behavior**\n- Uniquely identifies the third-party connector associated with the FTP configuration.\n- Acts as a key reference to bind FTP settings to a specific external connector service.\n- Required when creating, updating, or managing FTP connections via third-party integrations.\n- Typically immutable after initial assignment to preserve connection consistency.\n- Used by system components to route FTP operations through the correct external connector.\n\n**Implementation guidance**\n- Should be a string or numeric identifier that aligns with the third-party connector’s identification scheme.\n- Must be validated against the system’s registry of available connectors to ensure validity.\n- Should be set securely and protected from unauthorized modification.\n- Updates to this identifier should be handled cautiously, with appropriate validation and impact assessment.\n- Ensure the identifier complies with any formatting or naming conventions imposed by the third-party service.\n- Transmit and store this identifier securely, using encryption where applicable.\n\n**Examples**\n- \"connector_12345\"\n- \"tpConnector-67890\"\n- \"ftpThirdPartyConnector01\"\n- \"extConnector_abcde\"\n- \"thirdPartyFTP_001\"\n\n**Important notes**\n- This field is critical for correctly mapping FTP configurations to their corresponding third-party services.\n- An incorrect or missing _tpConnectorId can lead to connection failures, data misrouting, or integration errors.\n- Synchronization between this identifier and the third-party connector records must be maintained to avoid inconsistencies.\n- Changes to this field may require re-authentication or reconfiguration of the FTP integration.\n- Proper error handling should be implemented to manage cases where the identifier does not match any registered connector.\n\n**Dependency chain**\n- Depends on the existence and registration of third-party connectors within the system.\n- Referenced by FTP connection management modules, authentication services, and integration workflows.\n- Interacts with authorization mechanisms to ensure secure access to third-party services.\n- May influence logging, monitoring, and auditing processes related to FTP integrations.\n\n**Technical details**\n- Data type: string (or"},"directoryPath":{"type":"string","description":"The path to the 
directory on the FTP server where files will be accessed, stored, or managed. This path can be specified either as an absolute path starting from the root directory of the FTP server or as a relative path based on the user's initial directory upon login, depending on the server's configuration and permissions. It is essential that the path adheres to the FTP server's directory structure, naming conventions, and access controls to ensure successful file operations such as uploading, downloading, listing, or deleting files. The directoryPath serves as the primary reference point for all file-related commands during an FTP session, determining the scope and context of file management activities.\n\n**Field behavior**\n- Defines the target directory location on the FTP server for all file-related operations.\n- Supports both absolute and relative path formats, interpreted according to the FTP server’s setup.\n- Must comply with the server’s directory hierarchy, naming rules, and access permissions.\n- Used by the FTP client to navigate and perform operations within the specified directory.\n- Influences the scope of accessible files and subdirectories during FTP sessions.\n- Changes to this path affect subsequent file operations until updated or reset.\n\n**Implementation guidance**\n- Validate the directory path format to ensure compatibility with the FTP server, typically using forward slashes (`/`) as separators.\n- Normalize the path to remove redundant elements such as `.` (current directory) and `..` (parent directory) to prevent errors and security vulnerabilities.\n- Handle scenarios where the specified directory does not exist by either creating it if permissions allow or returning a clear, descriptive error message.\n- Properly encode special characters and escape sequences in the path to conform with FTP protocol requirements and server expectations.\n- Provide informative and user-friendly error messages when the path is invalid, inaccessible, or lacks sufficient permissions.\n- Sanitize input to prevent directory traversal attacks or unauthorized access to restricted areas.\n- Consider server-specific behaviors such as case sensitivity, symbolic links, and virtual directories when interpreting the path.\n- Ensure that path changes are atomic and consistent to avoid race conditions during concurrent FTP operations.\n\n**Examples**\n- `/uploads/images`\n- `documents/reports/2024`\n- `/home/user/data`\n- `./backup`\n- `../shared/resources`\n- `/var/www/html`\n- `projects/current`\n\n**Important notes**\n- Access to the specified directory depends on the credentials used during FTP authentication.\n- Directory permissions directly affect the ability to read, write, or list files within the"},"fileName":{"type":"string","description":"The name of the file to be accessed, transferred, or manipulated via FTP (File Transfer Protocol). This property specifies the exact filename, including its extension, uniquely identifying the file within the FTP directory. It is critical for accurately targeting the file during operations such as upload, download, rename, or delete. The filename must be precise and comply with the naming conventions and restrictions of the FTP server’s underlying file system to avoid errors. Case sensitivity depends on the server’s operating system, and special attention should be given to character encoding, especially when handling non-ASCII or international characters. 
The filename should not include any directory path separators, as these are managed separately in directory or path properties.\n\n**Field behavior**\n- Identifies the specific file within the FTP server directory for various file operations.\n- Must include the file extension to ensure precise identification.\n- Case sensitivity is determined by the FTP server’s operating system.\n- Excludes directory or path information; only the filename itself is specified.\n- Used in conjunction with directory or path properties to locate the full file path.\n\n**Implementation guidance**\n- Validate that the filename contains only characters allowed by the FTP server’s file system.\n- Ensure the filename is a non-empty, well-formed string without path separators.\n- Normalize case if required to match server-specific case sensitivity rules.\n- Properly handle encoding for special or international characters to prevent transfer errors.\n- Combine with directory or path properties to construct the complete file location.\n- Avoid embedding directory paths or separators within the filename property.\n\n**Default behavior**\n- When the user does not specify a file naming convention, default to timestamped filenames using {{timestamp}} (e.g., \"items-{{timestamp}}.csv\"). This ensures each run produces a unique file and avoids overwriting previous exports.\n- Only use a fixed filename (without timestamp) if the user explicitly requests overwriting or a fixed name.\n\n**Examples**\n- \"report.pdf\"\n- \"image_2024.png\"\n- \"backup_2023_12_31.zip\"\n- \"data.csv\"\n- \"items-{{timestamp}}.csv\"\n\n**Important notes**\n- The filename alone may not be sufficient to locate the file without the corresponding directory or path.\n- Different FTP servers and operating systems may have varying rules regarding case sensitivity.\n- Path separators such as \"/\" or \"\\\" must not be included in the filename.\n- Adherence to FTP server naming conventions and restrictions is essential to prevent errors.\n- Be mindful of potential filename length limits imposed by the FTP server or protocol.\n\n**Dependency chain**\n- Depends on FTP directory or path properties to specify the full file location.\n- Utilized alongside FTP server credentials and connection settings.\n- May be influenced by file transfer mode (binary or ASCII) based on the file type.\n\n**Technical details**\n- Data type: string\n- Must conform to the FTP server’s"},"inProgressFileName":{"type":"string","description":"inProgressFileName is the designated temporary filename assigned to a file during the FTP upload process to clearly indicate that the file transfer is currently underway and not yet complete. This naming convention serves as a safeguard to prevent other systems or processes from prematurely accessing, processing, or locking the file before the upload finishes successfully. Typically, once the upload is finalized, the file is renamed to its intended permanent filename, signaling that it is ready for further use or processing. Utilizing an in-progress filename helps maintain data integrity, avoid conflicts, and streamline workflow automation in environments where files are continuously transferred and monitored. 
It plays a critical role in managing file state visibility, ensuring that incomplete or partially transferred files are not mistakenly processed, which could lead to data corruption or inconsistent system behavior.\n\n**Field behavior**\n- Acts as a temporary marker for files actively being uploaded via FTP or similar protocols.\n- Prevents other processes, systems, or automated workflows from accessing or processing the file until the upload is fully complete.\n- The file is renamed atomically from the inProgressFileName to the final intended filename upon successful upload completion.\n- Helps manage concurrency, avoid file corruption, partial reads, or race conditions during file transfer.\n- Indicates the current state of the file within the transfer lifecycle, providing clear visibility into upload progress.\n- Supports cleanup mechanisms by identifying orphaned or stale temporary files resulting from interrupted uploads.\n\n**Implementation guidance**\n- Choose a unique, consistent, and easily identifiable naming pattern distinct from final filenames to clearly denote the in-progress status.\n- Commonly use suffixes or prefixes such as \".inprogress\", \".tmp\", \"_uploading\", or similar conventions that are recognized across systems.\n- Ensure the FTP server and client support atomic renaming operations to avoid partial file visibility or race conditions.\n- Verify that the chosen temporary filename does not collide with existing files in the target directory to prevent overwriting or confusion.\n- Maintain consistent naming conventions across all related systems, scripts, and automation tools for clarity and maintainability.\n- Implement robust cleanup routines to detect and remove orphaned or stale in-progress files from failed or abandoned uploads.\n- Coordinate synchronization between upload completion and renaming to prevent premature processing or file locking issues.\n\n**Examples**\n- \"datafile.csv.inprogress\"\n- \"upload.tmp\"\n- \"report_20240615.tmp\"\n- \"file_upload_inprogress\"\n- \"transaction_12345.uploading\"\n\n**Important notes**\n- This filename is strictly temporary and should never be"},"backupDirectoryPath":{"type":"string","description":"backupDirectoryPath specifies the file system path to the directory where backup files are stored during FTP operations. This directory serves as the designated location for saving copies of files before they are overwritten or deleted, ensuring that original data can be recovered if necessary. The path can be either absolute or relative, depending on the system context, and must conform to the operating system’s file path conventions. It is essential that this path is valid, accessible, and writable by the FTP service or application performing the backups to guarantee reliable backup creation and data integrity. Proper configuration of this path is critical to prevent data loss and to maintain a secure and organized backup environment.  \n**Field behavior:**  \n- Specifies the target directory for storing backup copies of files involved in FTP transactions.  \n- Activated only when backup functionality is enabled within the FTP process.  \n- Requires the path to be valid, accessible, and writable to ensure successful backup creation.  \n- Supports both absolute and relative paths, with relative paths resolved against a defined base directory.  \n- If unset or empty, backup operations may be disabled or fallback to a default directory if configured.  
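\nFor illustration, a minimal sketch of how this path can sit alongside the other FTP fields defined in this schema (the directory and file names shown are assumptions, not defaults):\n```json\n{\n  \"directoryPath\": \"/outbound/orders\",\n  \"fileName\": \"orders-{{timestamp}}.csv\",\n  \"backupDirectoryPath\": \"/outbound/orders/archive\"\n}\n```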
\n**Implementation guidance:**  \n- Validate the existence of the directory and verify write permissions before initiating backups.  \n- Normalize paths to handle trailing slashes, redundant separators, and platform-specific path formats.  \n- Provide clear and actionable error messages if the directory is invalid, inaccessible, or lacks necessary permissions.  \n- Support environment variables or placeholders for dynamic path resolution where applicable.  \n- Enforce security best practices by restricting access to the backup directory to authorized users or processes only.  \n- Ensure sufficient disk space is available in the backup directory to accommodate backup files without interruption.  \n- Implement concurrency controls such as file locking or synchronization if multiple backup operations may occur simultaneously.  \n**Examples:**  \n- \"/var/ftp/backups\"  \n- \"C:\\\\ftp\\\\backup\"  \n- \"./backup_files\"  \n- \"/home/user/ftp_backup\"  \n**Important notes:**  \n- The backup directory must have adequate storage capacity to prevent backup failures due to insufficient space.  \n- Permissions must allow the FTP process or service to create and modify files within the directory.  \n- Backup files may contain sensitive or confidential data; appropriate security measures should be enforced to protect them.  \n- Path formatting should align with the conventions of the underlying operating system to avoid errors.  \n- Failure to properly configure this path can result in loss of backup data or inability to recover"}}},"Jdbc":{"type":"object","description":"Configuration for JDBC import operations. Defines how data is written to a database\nvia a JDBC connection.\n\n**Query type determines which fields are required**\n\n| queryType        | Required fields              | Do NOT set        |\n|------------------|------------------------------|-------------------|\n| [\"per_record\"]   | query (array of SQL strings) | bulkInsert        |\n| [\"per_page\"]     | query (array of SQL strings) | bulkInsert        |\n| [\"bulk_insert\"]  | bulkInsert object            | query             |\n| [\"bulk_load\"]    | bulkLoad object              | query             |\n\n**Critical:** query IS AN ARRAY OF STRINGS\nThe query field must be an array of plain strings, NOT a single string and NOT an array of objects.\nCorrect: [\"INSERT INTO users (name) VALUES ('{{name}}')\"]\nWrong: \"INSERT INTO users ...\"\nWrong: [{\"query\": \"INSERT INTO users ...\"}]\n","properties":{"query":{"type":"array","items":{"type":"string"},"description":"Array of SQL query strings to execute. REQUIRED when queryType is [\"per_record\"] or [\"per_page\"].\n\nEach element is a complete SQL statement as a plain string. Typically contains a single query.\n\nUse Handlebars {{fieldName}} syntax to inject values from incoming records.\n\n**Format — array of strings**\n- CORRECT: [\"INSERT INTO users (name, email) VALUES ('{{name}}', '{{email}}')\"]\n- WRONG: \"INSERT INTO users ...\"  (not an array)\n- WRONG: [{\"query\": \"INSERT INTO users ...\"}]  (objects are invalid — causes Cast error)\n\n**Examples**\n- INSERT: [\"INSERT INTO users (name, email) VALUES ('{{name}}', '{{email}}')\"]\n- UPDATE: [\"UPDATE inventory SET qty = {{qty}} WHERE sku = '{{sku}}'\"]\n- MERGE/UPSERT: [\"MERGE INTO target USING (SELECT CAST(? AS VARCHAR) AS email) AS src ON target.email = src.email WHEN MATCHED THEN UPDATE SET name = ? 
WHEN NOT MATCHED THEN INSERT (name, email) VALUES (?, ?)\"]\n\n**String vs numeric values in handlebars**\n- Strings: wrap in single quotes — '{{name}}'\n- Numbers: no quotes — {{quantity}}\n"},"queryType":{"type":"array","items":{"type":"string","enum":["bulk_insert","per_record","per_page","bulk_load"]},"description":"Execution strategy for the SQL operation. REQUIRED. Must be an array with one value.\n\n**Decision tree**\n\n1. If UPDATE or UPSERT/MERGE → [\"per_record\"] (set query field)\n2. If INSERT with \"ignore existing\" / \"skip duplicates\" / match logic → [\"per_record\"] (set query field)\n3. If pure INSERT with no duplicate checking → [\"bulk_insert\"] (set bulkInsert object)\n4. If high-volume bulk load → [\"bulk_load\"] (set bulkLoad object)\n\n**Critical relationship to other fields**\n| queryType        | REQUIRES              | DO NOT SET   |\n|------------------|-----------------------|--------------|\n| [\"per_record\"]   | query (array)         | bulkInsert   |\n| [\"per_page\"]     | query (array)         | bulkInsert   |\n| [\"bulk_insert\"]  | bulkInsert object     | query        |\n| [\"bulk_load\"]    | bulkLoad object       | query        |\n\n**Examples**\n- Per-record upsert: [\"per_record\"]\n- Bulk insert: [\"bulk_insert\"]\n- Bulk load: [\"bulk_load\"]\n"},"bulkInsert":{"type":"object","description":"Bulk insert configuration. REQUIRED when queryType is [\"bulk_insert\"]. DO NOT SET when queryType is [\"per_record\"].\n\nEnables efficient batch insertion of records into a database table without per-record SQL.\n","properties":{"tableName":{"type":"string","description":"Target database table name for bulk insert. REQUIRED.\nCan include schema qualifiers (e.g., \"schema.tableName\").\n"},"batchSize":{"type":"string","description":"Number of records per batch during bulk insert.\nLarger values improve throughput but use more memory.\nCommon values: \"1000\", \"5000\".\n"}}},"bulkLoad":{"type":"object","description":"Bulk load configuration. REQUIRED when queryType is [\"bulk_load\"]. Uses database-native bulk loading for maximum throughput.\n","properties":{"tableName":{"type":"string","description":"Target database table name for bulk load. 
REQUIRED.\nCan include schema qualifiers (e.g., \"schema.tableName\").\n"},"primaryKeys":{"type":["array","null"],"items":{"type":"string"},"description":"Primary key column names for upsert/merge during bulk load.\nWhen set, existing rows matching these keys are updated; non-matching rows are inserted.\nExample: [\"id\"] or [\"order_id\", \"product_id\"] for composite keys.\n"}}},"lookups":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string","description":"Lookup name, referenced in field mappings or ignore logic."},"query":{"type":"string","description":"SQL query for the lookup (e.g., \"SELECT id FROM users WHERE email = '{{email}}'\")."},"extract":{"type":"string","description":"Path in the lookup result to use as the value (e.g., \"id\")."},"_lookupCacheId":{"type":"string","format":"objectId","description":"Reference to a cached lookup for performance optimization."},"allowFailures":{"type":"boolean","description":"When true, lookup failures do not stop the import."},"map":{"type":"object","description":"Static value mapping for lookup results."},"default":{"type":"string","description":"Default value when lookup returns no results."}}},"description":"Lookup queries executed against the JDBC connection to resolve reference values.\nEach lookup runs a SQL query and extracts a value to use in field mappings.\n"}}},"Mongodb":{"type":"object","description":"Configuration for MongoDB imports. This schema defines ONLY the MongoDB-specific configuration properties.\n\n**IMPORTANT:** Properties like `ignoreExisting`, `ignoreMissing`, `name`, `description` are NOT part of this schema - they belong at a different level. When generating this configuration, return ONLY the properties defined here (method, collection, ignoreExtract, etc.).","properties":{"method":{"type":"string","enum":["insertMany","updateOne"],"description":"The MongoDB driver operation this import performs for incoming records. This value is a MongoDB method name, NOT an HTTP verb.\n\n**Valid values**\n- \"insertMany\": inserts incoming records as new documents in the target collection. Use for pure inserts; combine with ignoreExtract or ignoreLookupFilter when existing records should be skipped.\n- \"updateOne\": updates an existing document that matches the filter criteria; combine with upsert: true to insert a document when no match is found.\n\n**Implementation guidance**
\n- Choose \"insertMany\" when the user asks to create, insert, or add records.\n- Choose \"updateOne\" when the user asks to update, sync, upsert, or \"create or update\" records (set upsert accordingly).\n- Per standard MongoDB semantics, updateOne works together with the filter and update fields, while insertMany inserts the document content.\n\n**Important notes**\n- The value must be one of the enum values; HTTP verbs such as GET, POST, PUT, PATCH, or DELETE are not valid here."},"collection":{"type":"string","description":"Specifies the exact name of the MongoDB collection where data operations—including insertion, querying, updating, or deletion—will be executed. This property defines the precise target collection within the database, establishing the scope and context of the data affected by the API call. The collection name must strictly adhere to MongoDB's naming conventions to ensure compatibility, prevent conflicts with system-reserved collections, and maintain database integrity. 
Proper naming facilitates efficient data organization, indexing, query performance, and overall database scalability.\n\n**Field behavior**\n- Identifies the specific MongoDB collection for all CRUD (Create, Read, Update, Delete) operations.\n- Directly determines which subset of data is accessed, modified, or deleted during the API request.\n- Must be a valid, non-empty UTF-8 string that complies with MongoDB collection naming rules.\n- Case-sensitive, meaning collections named \"Users\" and \"users\" are distinct and separate.\n- Influences the application’s data flow and operational context by specifying the data container.\n- Changes to this property dynamically alter the target dataset for the API operation.\n\n**Implementation guidance**\n- Validate that the collection name is a non-empty UTF-8 string without null characters, spaces, or invalid symbols such as '$' or '.' except where allowed.\n- Ensure the name does not start with the reserved prefix \"system.\" to avoid conflicts with MongoDB internal collections.\n- Adopt consistent, descriptive, and meaningful naming conventions to enhance database organization, maintainability, and clarity.\n- Verify the existence of the collection before performing operations; implement robust error handling for cases where the collection does not exist.\n- Consider how the collection name impacts indexing strategies, query optimization, sharding, and overall database performance.\n- Maintain consistency in collection naming across the application to prevent unexpected behavior or data access issues.\n- Avoid overly long or complex names to reduce potential errors and improve readability.\n\n**Examples**\n- \"users\"\n- \"orders\"\n- \"inventory_items\"\n- \"logs_2024\"\n- \"customer_feedback\"\n- \"session_data\"\n- \"product_catalog\"\n\n**Important notes**\n- Collection names are case-sensitive and must be used consistently throughout the application to avoid data inconsistencies.\n- Avoid reserved names and prefixes such as \"system.\" to prevent interference with MongoDB’s internal collections and operations.\n- The choice of collection name can significantly affect database performance, indexing efficiency, and scalability.\n- Renaming a collection requires updating all references in the application and may necessitate data"},"filter":{"type":"string","description":"A MongoDB query filter object used to specify detailed criteria for selecting documents within a collection. This filter defines the exact conditions that documents must satisfy to be included in the query results, leveraging MongoDB's comprehensive and expressive query syntax. It supports a wide range of operators and expressions to enable precise and flexible querying of documents based on field values, existence, data types, ranges, nested properties, array contents, and more. 
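For example, a minimal sketch of a filter used with the updateOne method to match on the incoming record's id (field names are illustrative; as with ignoreLookupFilter, handlebars placeholders are resolved from the incoming record and the filter is supplied as a JSON string):\n```json\n{\n  \"method\": \"updateOne\",\n  \"collection\": \"customers\",\n  \"filter\": \"{\\\"customerId\\\": \\\"{{id}}\\\"}\"\n}\n```\n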
The filter allows for complex logical combinations and can target deeply nested fields using dot notation, making it a powerful tool for fine-grained data retrieval.\n\n**Field behavior**\n- Determines which documents in the MongoDB collection are matched and returned by the query.\n- Supports complex query expressions including comparison operators (`$eq`, `$gt`, `$lt`, `$gte`, `$lte`, `$ne`), logical operators (`$and`, `$or`, `$not`, `$nor`), element operators (`$exists`, `$type`), array operators (`$in`, `$nin`, `$all`, `$elemMatch`), and evaluation operators (`$regex`, `$expr`).\n- Enables filtering based on simple fields, nested document properties using dot notation, and array contents.\n- Supports querying for null values, missing fields, and specific data types.\n- Allows combining multiple conditions using logical operators to form complex queries.\n- If omitted or provided as an empty object, the query defaults to returning all documents in the collection without filtering.\n- Can be used to optimize data retrieval by narrowing down the result set to only relevant documents.\n\n**Implementation guidance**\n- Construct the filter using valid MongoDB query operators and syntax to ensure correct behavior.\n- Validate and sanitize all input used to build the filter to prevent injection attacks or malformed queries.\n- Utilize dot notation to query nested fields within embedded documents or arrays.\n- Optimize query performance by aligning filters with indexed fields in the MongoDB collection.\n- Handle edge cases such as null values, missing fields, and data type variations appropriately within the filter.\n- Test filters thoroughly to ensure they return expected results and handle all relevant data scenarios.\n- Consider the impact of filter complexity on query execution time and resource usage.\n- Use projection and indexing strategies in conjunction with filters to improve query efficiency.\n- Be mindful of MongoDB version compatibility when using advanced operators or expressions.\n\n**Examples**\n- `{ \"status\": \"active\" }` — selects documents where the `status` field exactly matches \"active\".\n- `{ \"age\": { \"$gte\":"},"document":{"type":"string","description":"The main content or data structure representing a single record in a MongoDB database, encapsulated as a JSON-like object known as a document. This document consists of key-value pairs where keys are field names and values can be of various data types, including nested documents, arrays, strings, numbers, booleans, dates, and binary data, allowing for flexible and hierarchical data representation. It serves as the fundamental unit of data storage and retrieval within MongoDB collections and typically includes a unique identifier field (_id) that ensures each document can be distinctly accessed. Documents can contain metadata, support complex nested structures, and vary in schema within the same collection, reflecting MongoDB's schema-less design. 
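Because this field is typed as a string in this schema, a reasonable sketch is a JSON template whose handlebars placeholders are filled from the incoming record (field names are assumptions, shown alongside insertMany for context):\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"customers\",\n  \"document\": \"{\\\"name\\\": \\\"{{name}}\\\", \\\"email\\\": \\\"{{email}}\\\"}\"\n}\n```\nThe template can carry whatever fields the target collection accepts, since collections do not enforce a fixed schema. 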
This flexibility enables dynamic and evolving data models suited for diverse application needs, supporting efficient querying, indexing, and atomic operations at the document level.\n\n**Field behavior**\n- Represents an individual MongoDB document stored within a collection.\n- Contains key-value pairs with field names as keys and diverse data types as values, including nested documents and arrays.\n- Acts as the primary unit for data storage, retrieval, and manipulation in MongoDB.\n- Includes a mandatory unique identifier field (_id) unless auto-generated by MongoDB.\n- Supports flexible schema, allowing documents within the same collection to have different structures.\n- Can include metadata fields and support indexing for optimized queries.\n- Allows for embedding related data directly within the document to reduce the need for joins.\n- Supports atomic operations at the document level, ensuring consistency during updates.\n\n**Implementation guidance**\n- Ensure compliance with MongoDB BSON format constraints for data types and structure.\n- Validate field names to exclude reserved characters such as '.' and '$' unless explicitly permitted.\n- Support serialization and deserialization processes between application-level objects and MongoDB documents.\n- Enable handling of nested documents and arrays to represent complex data models effectively.\n- Consider indexing frequently queried fields within the document to enhance performance.\n- Monitor document size to stay within MongoDB’s 16MB limit to avoid performance degradation.\n- Use appropriate data types to optimize storage and query efficiency.\n- Implement validation rules or schema enforcement at the application or database level if needed to maintain data integrity.\n\n**Examples**\n- { \"_id\": ObjectId(\"507f1f77bcf86cd799439011\"), \"name\": \"Alice\", \"age\": 30, \"address\": { \"street\": \"123 Main St\", \"city\": \"Metropolis\" } }\n- { \"productId\": \""},"update":{"type":"string","description":"Update operation details specifying the precise modifications to be applied to one or more documents within a MongoDB collection. This field defines the exact changes using MongoDB's rich set of update operators, enabling atomic and efficient modifications to fields, arrays, or embedded documents without replacing entire documents unless explicitly intended. It supports a comprehensive range of update operations such as setting new values, incrementing numeric fields, removing fields, appending or removing elements in arrays, renaming fields, and more, providing fine-grained control over document mutations. Additionally, it accommodates advanced update mechanisms including pipeline-style updates introduced in MongoDB 4.2+, allowing for complex transformations and conditional updates within a single operation. 
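For instance, a minimal sketch of an update value using $set with handlebars placeholders (field names are illustrative; the update is supplied as a JSON string because the field is typed as a string here):\n```json\n{\n  \"method\": \"updateOne\",\n  \"collection\": \"customers\",\n  \"update\": \"{\\\"$set\\\": {\\\"status\\\": \\\"{{status}}\\\"}}\"\n}\n```\n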
The update specification can also leverage array filters and positional operators to target specific elements within arrays, enhancing the precision and flexibility of updates.\n\n**Field behavior**\n- Specifies the criteria and detailed modifications to apply to matching documents in a collection.\n- Supports a wide array of MongoDB update operators like `$set`, `$unset`, `$inc`, `$push`, `$pull`, `$addToSet`, `$rename`, `$mul`, `$min`, `$max`, and others.\n- Can target single or multiple documents depending on the operation context and options such as `multi` or `upsert`.\n- Ensures atomicity of update operations to maintain data integrity and consistency across concurrent operations.\n- Allows both partial updates using operators and full document replacements when the update object is a complete document without operators.\n- Supports pipeline-style updates for complex, multi-stage transformations and conditional logic within updates.\n- Enables the use of array filters and positional operators to selectively update elements within arrays based on specified conditions.\n- Handles upsert behavior to insert new documents if no existing documents match the update criteria, when enabled.\n\n**Implementation guidance**\n- Validate the update object rigorously to ensure compliance with MongoDB update syntax and semantics, preventing runtime errors.\n- Favor using update operators for partial updates to optimize performance and minimize unintended data overwrites.\n- Implement graceful handling for cases where no documents match the update criteria, optionally supporting upsert behavior to insert new documents if none exist.\n- Explicitly control whether updates affect single or multiple documents to avoid accidental mass modifications.\n- Incorporate robust error handling for invalid update operations, schema conflicts, or violations of database constraints.\n- Consider the impact of update operations on indexes, triggers, and other database mechanisms to maintain overall system performance and data integrity.\n- Support advanced update features such as array filters, positional operators"},"upsert":{"type":"boolean","description":"Upsert is a boolean flag that controls whether a MongoDB update operation should insert a new document if no existing document matches the specified query criteria, or strictly update existing documents only. When set to true, the operation performs an atomic \"update or insert\" action: it updates the matching document if found, or inserts a new document constructed from the update criteria if none exists. This ensures that after the operation, a document matching the criteria will exist in the collection. When set to false, the operation only updates documents that already exist and skips the operation entirely if no match is found, preventing any new documents from being created. 
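A common pairing, sketched below with an illustrative collection name, is updateOne with upsert enabled so the import updates a matching document or inserts one when no match exists:\n```json\n{\n  \"method\": \"updateOne\",\n  \"collection\": \"customers\",\n  \"upsert\": true\n}\n```\n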
This flag is essential for scenarios where maintaining data integrity and avoiding unintended document creation is critical.\n\n**Field behavior**\n- Determines whether the update operation can create a new document when no existing document matches the query.\n- If true, performs an atomic operation that either updates an existing document or inserts a new one.\n- If false, restricts the operation to updating existing documents only, with no insertion.\n- Influences the database state by enabling conditional document creation during update operations.\n- Affects the outcome of update queries by potentially expanding the dataset with new documents.\n\n**Implementation guidance**\n- Use upsert=true when you need to guarantee the existence of a document after the operation, regardless of prior presence.\n- Use upsert=false to restrict changes strictly to existing documents, avoiding unintended document creation.\n- Ensure that when upsert is true, the update document includes all required fields to create a valid new document.\n- Validate that the query filter and update document are logically consistent to prevent unexpected or malformed insertions.\n- Consider the impact on database size, indexing, and performance when enabling upsert, as it may increase document count.\n- Test the behavior in your specific MongoDB driver and version to confirm upsert semantics and compatibility.\n- Be mindful of potential race conditions in concurrent environments that could lead to duplicate documents if not handled properly.\n\n**Examples**\n- `upsert: true` — Update the matching document if found; otherwise, insert a new document based on the update criteria.\n- `upsert: false` — Update only if a matching document exists; skip the operation if no match is found.\n- Using upsert to create a user profile document if it does not exist during an update operation.\n- Employing upsert in configuration management to ensure default settings documents are present and updated as needed.\n\n**Important notes**\n- Upsert operations can increase"},"ignoreExtract":{"type":"string","description":"Enter the path to the field in the source record that should be used to identify existing records. If a value is found for this field, then the source record will be considered an existing record.\n\nThe dynamic field list makes it easy for you to select the field. If a field contains special characters (which may be the case for certain APIs), then the field is enclosed with square brackets [ ], for example, [field-name].\n\n[*] indicates that the specified field is an array, for example, items[*].id. In this case, you should replace * with a number corresponding to the array item. The value for only this array item (not, the entire array) is checked.\n**CRITICAL:** JSON path to the field in the incoming/source record that should be used to determine if the record already exists in the target system.\n\n**This field is ONLY used when `ignoreExisting` is set to true.** Note: `ignoreExisting` is a separate property that is NOT part of this mongodb configuration.\n\nWhen `ignoreExisting: true` is set (at the import level), this field specifies which field from the incoming data should be checked for a value. If that field has a value, the record is considered to already exist and will be skipped.\n\n**When to set this field**\n\n**ALWAYS set this field** when:\n1. The `ignoreExisting` property will be set to `true` (for \"ignore existing\" scenarios)\n2. 
The user's prompt mentions checking a specific field to identify existing records\n3. The prompt includes phrases like:\n   - \"indicated by the [field] field\"\n   - \"matching the [field] field\"\n   - \"based on [field]\"\n   - \"using the [field] field\"\n   - \"if [field] is populated\"\n   - \"where [field] exists\"\n\n**How to determine the value**\n\n1. **Look for the identifier field in the prompt:**\n   - \"ignoring existing customers indicated by the **id field**\" → `ignoreExtract: \"id\"`\n   - \"skip existing vendors based on the **email field**\" → `ignoreExtract: \"email\"`\n   - \"ignore records where **customerId** is populated\" → `ignoreExtract: \"customerId\"`\n\n2. **Use the field from the incoming/source data** (NOT the target system field):\n   - This is the field path in the data coming INTO the import\n   - Should match a field that will be present in the source records\n\n3. **Apply special character handling:**\n   - If a field contains special characters (hyphens, spaces, etc.), enclose with square brackets: `[field-name]`, `[customer id]`\n   - For array notation, use `[*]` to indicate array items: `items[*].id` (replace `*` with index if needed)\n\n**Field path syntax**\n\n- **Simple field:** `id`, `email`, `customerId`\n- **Nested field:** `customer.id`, `billing.email`\n- **Field with special chars:** `[customer-id]`, `[external_id]`, `[Customer Name]`\n- **Array field:** `items[*].id` (or `items[0].id` for specific index)\n- **Nested in array:** `addresses[*].postalCode`\n\n**Examples**\n\nThese examples show ONLY the mongodb configuration (not the full import structure):\n\n**Example 1: Simple field**\nPrompt: \"Create customers while ignoring existing customers indicated by the id field\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"customers\",\n  \"ignoreExtract\": \"id\"\n}\n```\n(Note: `ignoreExisting: true` should also be set, but at a different level - not shown here)\n\n**Example 2: Field with special characters**\nPrompt: \"Import vendors, skip existing ones based on the vendor-code field\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"vendors\",\n  \"ignoreExtract\": \"[vendor-code]\"\n}\n```\n\n**Example 3: Nested field**\nPrompt: \"Add accounts, ignore existing where customer.email matches\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"accounts\",\n  \"ignoreExtract\": \"customer.email\"\n}\n```\n\n**Example 4: No ignoreExisting scenario**\nPrompt: \"Create or update all customer records\"\n```json\n{\n  \"method\": \"updateOne\",\n  \"collection\": \"customers\"\n}\n```\n(ignoreExtract should NOT be set when ignoreExisting is not true)\n\n**Important notes**\n\n- **DO NOT SET** this field if `ignoreExisting` is not true\n- **`ignoreExisting` is NOT part of this mongodb schema** - it belongs at a different level\n- This specifies the SOURCE field, not the target MongoDB field\n- The field path is case-sensitive\n- Must be a valid field that exists in the incoming data\n- Works in conjunction with `ignoreExisting` - both must be set for this feature to work\n- If the specified field has a value in the incoming record, that record is considered existing and will be skipped\n\n**Common patterns**\n\n- \"ignore existing [records] indicated by the **[field]**\" → Set `ignoreExtract: \"[field]\"`\n- \"skip existing [records] based on **[field]**\" → Set `ignoreExtract: \"[field]\"`\n- \"if **[field]** is populated\" → Set `ignoreExtract: \"[field]\"`\n- No mention of specific field for 
identification → May need to use default like \"id\" or \"_id\""},"ignoreLookupFilter":{"type":"string","description":"If you are adding documents to your MongoDB instance and you have the Ignore Existing flag set to true please enter a filter object here to find existing documents in this collection. The value of this field must be a valid JSON string describing a MongoDB filter object in the correct format and with the correct operators. Refer to the MongoDB documentation for the list of valid query operators and the correct filter object syntax.\n**CRITICAL:** A JSON-stringified MongoDB query filter used to search for existing documents in the target collection.\n\nIt defines the criteria to match incoming records against existing MongoDB documents. If the query returns a result, the record is considered \"existing\" and the operation (usually insert) is skipped.\n\n**Critical distinction vs `ignoreExtract`**\n- Use **`ignoreExtract`** when you just want to check if a field exists on the **INCOMING** record (no DB query).\n- Use **`ignoreLookupFilter`** when you need to query the **MONGODB DATABASE** to see if a record exists (e.g., \"check if email exists in DB\").\n\n**When to set this field**\nSet this field when:\n1. Top-level `ignoreExisting` is `true`.\n2. The prompt implies checking the database for duplicates (e.g., \"skip if email already exists\", \"prevent duplicate SKUs\", \"match on email\").\n\n**Format requirements**\n- Must be a valid **JSON string** representing a MongoDB query.\n- Use Handlebars `{{FieldName}}` to reference values from the incoming record.\n- Format: `\"{ \\\"mongoField\\\": \\\"{{incomingField}}\\\" }\"`\n\n**Examples**\n\n**Example 1: Match on a single field**\nPrompt: \"Insert users, skip if email already exists in the database\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"users\",\n  \"ignoreLookupFilter\": \"{\\\"email\\\":\\\"{{email}}\\\"}\"\n}\n```\n\n**Example 2: Match on multiple fields**\nPrompt: \"Add products, ignore if SKU and VerifyID match existing products\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"products\",\n  \"ignoreLookupFilter\": \"{\\\"sku\\\":\\\"{{sku}}\\\", \\\"verify_id\\\":\\\"{{verify_id}}\\\"}\"\n}\n```\n\n**Example 3: Using operators**\nPrompt: \"Import orders, ignore if order_id matches\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"orders\",\n  \"ignoreLookupFilter\": \"{\\\"order_id\\\": { \\\"$eq\\\": \\\"{{id}}\\\" } }\"\n}\n```\n\n**Important notes**\n- The value MUST be a string (stringify the JSON).\n- Mongo field names (keys) must match the **collection schema**.\n- Handlebars variables (values) must match the **incoming data**."}}},"NetSuite":{"type":"object","description":"Configuration for NetSuite exports","properties":{"lookups":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the category or classification of the lookup being referenced. 
It defines the nature or kind of data that the lookup represents, enabling the system to handle it appropriately based on its type.\n\n**Field behavior**\n- Determines the category or classification of the lookup.\n- Influences how the lookup data is processed and validated.\n- Helps in filtering or grouping lookups by their type.\n- Typically represented as a string or enumerated value indicating the lookup category.\n\n**Implementation guidance**\n- Define a clear set of allowed values or an enumeration for the type to ensure consistency.\n- Validate the type value against the predefined set during data input or API requests.\n- Use the type property to drive conditional logic or UI rendering related to the lookup.\n- Document all possible type values and their meanings for API consumers.\n\n**Examples**\n- \"country\" — representing a lookup of countries.\n- \"currency\" — representing a lookup of currency codes.\n- \"status\" — representing a lookup of status codes or states.\n- \"category\" — representing a lookup of item categories.\n\n**Important notes**\n- The type value should be consistent across the system to avoid ambiguity.\n- Changing the type of an existing lookup may affect dependent systems or data integrity.\n- The type property is essential for distinguishing between different lookup datasets.\n\n**Dependency chain**\n- May depend on or be referenced by other properties that require lookup data.\n- Used in conjunction with lookup values or keys to provide meaningful data.\n- Can influence validation rules or business logic applied to the lookup.\n\n**Technical details**\n- Typically implemented as a string or an enumerated type in the API schema.\n- Should be indexed or optimized for quick filtering and retrieval.\n- May be case-sensitive or case-insensitive depending on system design.\n- Should be included in API documentation with all valid values listed."},"searchField":{"type":"string","description":"searchField specifies the particular field or attribute within a dataset or index that the search operation should target. 
This property determines which field the search query will be applied to, enabling focused and efficient retrieval of relevant information.\n\n**Field behavior**\n- Defines the specific field in the data source to be searched.\n- Limits the search scope to the designated field, improving search precision.\n- Can accept field names as strings, corresponding to indexed or searchable attributes.\n- If omitted or null, the search may default to a predefined field or perform a broad search across multiple fields.\n\n**Implementation guidance**\n- Ensure the field name provided matches exactly with the field names defined in the data schema or index.\n- Validate that the field is searchable and supports the type of queries intended.\n- Consider supporting nested or compound field names if the data structure is hierarchical.\n- Provide clear error handling for invalid or unsupported field names.\n- Allow for case sensitivity or insensitivity based on the underlying search engine capabilities.\n\n**Examples**\n- \"title\" — to search within the title field of documents.\n- \"author.name\" — to search within a nested author name field.\n- \"description\" — to search within the description or summary field.\n- \"tags\" — to search within a list or array of tags associated with items.\n\n**Important notes**\n- The effectiveness of the search depends on the indexing and search capabilities of the specified field.\n- Using an incorrect or non-existent field name may result in no search results or errors.\n- Some fields may require special handling, such as tokenization or normalization, to support effective searching.\n- The property is case-sensitive if the underlying system treats field names as such.\n\n**Dependency chain**\n- Dependent on the data schema or index configuration defining searchable fields.\n- May interact with other search parameters such as search query, filters, or sorting.\n- Influences the search engine or database query construction.\n\n**Technical details**\n- Typically represented as a string matching the field identifier in the data source.\n- May support dot notation for nested fields (e.g., \"user.address.city\").\n- Should be sanitized to prevent injection attacks or malformed queries.\n- Used by the search backend to construct field-specific query clauses."},"expression":{"type":"string","description":"Expression used to define the criteria or logic for the lookup operation. 
This expression typically consists of a string that can include variables, operators, functions, or references to other data elements, enabling dynamic and flexible lookup conditions.\n\n**Field behavior**\n- Specifies the condition or formula that determines how the lookup is performed.\n- Can include variables, constants, operators, and functions to build complex expressions.\n- Evaluated at runtime to filter or select data based on the defined logic.\n- Supports dynamic referencing of other fields or parameters within the lookup context.\n\n**Implementation guidance**\n- Ensure the expression syntax is consistent with the supported expression language or parser.\n- Validate expressions before execution to prevent runtime errors.\n- Support common operators (e.g., ==, !=, >, <, AND, OR) and functions as per the system capabilities.\n- Allow expressions to reference other properties or variables within the lookup scope.\n- Provide clear error messages if the expression is invalid or cannot be evaluated.\n\n**Examples**\n- `\"status == 'active' AND score > 75\"`\n- `\"userRole == 'admin' OR accessLevel >= 5\"`\n- `\"contains(tags, 'urgent')\"`\n- `\"date >= '2024-01-01' AND date <= '2024-12-31'\"`\n\n**Important notes**\n- The expression must be a valid string conforming to the expected syntax.\n- Incorrect or malformed expressions may cause lookup failures or unexpected results.\n- Expressions should be designed to optimize performance, avoiding overly complex or resource-intensive logic.\n- The evaluation context (available variables and functions) should be clearly documented for users.\n\n**Dependency chain**\n- Depends on the lookup context providing necessary variables or data references.\n- May depend on the expression evaluation engine or parser integrated into the system.\n- Influences the outcome of the lookup operation and subsequent data processing.\n\n**Technical details**\n- Typically represented as a string data type.\n- Parsed and evaluated by an expression engine or interpreter at runtime.\n- May support standard expression languages such as SQL-like syntax, JSONPath, or custom domain-specific languages.\n- Should handle escaping and quoting of string literals within the expression."},"resultField":{"type":"string","description":"resultField specifies the name of the field in the lookup result that contains the desired value to be retrieved or used.\n\n**Field behavior**\n- Identifies the specific field within the lookup result data to extract.\n- Determines which piece of information from the lookup response is returned or processed.\n- Must correspond to a valid field name present in the lookup result structure.\n- Used to map or transform lookup data into the expected output format.\n\n**Implementation guidance**\n- Ensure the field name matches exactly with the field in the lookup result, including case sensitivity if applicable.\n- Validate that the specified field exists in all possible lookup responses to avoid runtime errors.\n- Use this property to customize which data from the lookup is utilized downstream.\n- Consider supporting nested field paths if the lookup result is a complex object.\n\n**Examples**\n- \"email\" — to extract the email address field from the lookup result.\n- \"userId\" — to retrieve the user identifier from the lookup data.\n- \"address.city\" — to access a nested city field within an address object in the lookup result.\n\n**Important notes**\n- Incorrect or misspelled field names will result in missing or null values.\n- The field must be 
present in the lookup response schema; otherwise, the lookup will not yield useful data.\n- This property does not perform any transformation; it only selects the field to be used.\n\n**Dependency chain**\n- Depends on the structure and schema of the lookup result data.\n- Used in conjunction with the lookup operation that fetches the data.\n- May influence downstream processing or mapping logic that consumes the lookup output.\n\n**Technical details**\n- Typically a string representing the key or path to the desired field in the lookup result object.\n- May support dot notation for nested fields if supported by the implementation.\n- Should be validated against the lookup result schema during configuration or runtime."},"includeInactive":{"type":"boolean","description":"IncludeInactive: Specifies whether to include inactive items in the lookup results.\n**Field behavior**\n- When set to true, the lookup results will contain both active and inactive items.\n- When set to false or omitted, only active items will be included in the results.\n- Affects the filtering logic applied during data retrieval for lookups.\n**Implementation guidance**\n- Default the value to false if not explicitly provided to avoid unintended inclusion of inactive items.\n- Ensure that the data source supports filtering by active/inactive status.\n- Validate the input to accept only boolean values.\n- Consider performance implications when including inactive items, especially if the dataset is large.\n**Examples**\n- includeInactive: true — returns all items regardless of their active status.\n- includeInactive: false — returns only items marked as active.\n**Important notes**\n- Including inactive items may expose deprecated or obsolete data; use with caution.\n- The definition of \"inactive\" depends on the underlying data model and should be clearly documented.\n- This property is typically used in administrative or audit contexts where full data visibility is required.\n**Dependency chain**\n- Dependent on the data source’s ability to distinguish active vs. inactive items.\n- May interact with other filtering or pagination parameters in the lookup API.\n**Technical details**\n- Data type: Boolean.\n- Default value: false.\n- Typically implemented as a query parameter or request body field in lookup API calls.\n- Requires backend support to filter data based on active status flags or timestamps."},"_id":{"type":"object","description":"_id: The unique identifier for the lookup entry within the system. 
This identifier is typically a string or an ObjectId that uniquely distinguishes each lookup record from others in the database or dataset.\n**Field behavior**\n- Serves as the primary key for the lookup entry.\n- Must be unique across all lookup entries.\n- Immutable once assigned; should not be changed to maintain data integrity.\n- Used to reference the lookup entry in queries, updates, and deletions.\n**Implementation guidance**\n- Use a consistent format for the identifier, such as a UUID or database-generated ObjectId.\n- Ensure uniqueness by leveraging database constraints or application-level checks.\n- Assign the _id at the time of creation of the lookup entry.\n- Avoid exposing internal _id values directly to end-users unless necessary.\n**Examples**\n- \"507f1f77bcf86cd799439011\"\n- \"123e4567-e89b-12d3-a456-426614174000\"\n- \"lookup_001\"\n**Important notes**\n- The _id field is critical for data integrity and should be handled with care.\n- Changing the _id after creation can lead to broken references and data inconsistencies.\n- In some systems, the _id may be automatically generated by the database.\n**Dependency chain**\n- May be referenced by other fields or entities that link to the lookup entry.\n- Dependent on the database or storage system's method of generating unique identifiers.\n**Technical details**\n- Typically stored as a string or a specialized ObjectId type depending on the database.\n- Indexed to optimize query performance.\n- Often immutable and enforced by database schema constraints."},"default":{"type":["object","null"],"description":"A boolean property indicating whether the lookup is the default selection among multiple lookup options.\n\n**Field behavior**\n- Determines if this lookup is automatically selected or used by default in the absence of user input.\n- Only one lookup should be marked as default within a given context to avoid ambiguity.\n- Influences the initial state or value presented to the user or system when multiple lookups are available.\n\n**Implementation guidance**\n- Set to `true` for the lookup that should be the default choice; all others should be `false` or omitted.\n- Validate that only one lookup per group or context has `default` set to `true`.\n- Use this property to pre-populate fields or guide user selections in UI or processing logic.\n\n**Examples**\n- `default: true` — This lookup is the default selection.\n- `default: false` — This lookup is not the default.\n- Omitted `default` property implies the lookup is not the default.\n\n**Important notes**\n- If multiple lookups have `default` set to `true`, the system behavior may be unpredictable.\n- The absence of a `default` lookup means no automatic selection; user input or additional logic is required.\n- This property is typically optional but recommended for clarity in multi-lookup scenarios.\n\n**Dependency chain**\n- Related to the `lookups` array or collection where multiple lookup options exist.\n- May influence UI components or backend logic that rely on default selections.\n\n**Technical details**\n- Data type: Boolean (`true` or `false`).\n- Default value: Typically `false` if omitted.\n- Should be included at the same level as other lookup properties within the `lookups` object."}},"description":"A comprehensive collection of lookup tables or reference data sets designed to support and enhance the main data model or application logic. 
These lookup tables provide predefined, authoritative sets of values that can be consistently referenced throughout the system to ensure data uniformity, minimize redundancy, and facilitate robust data validation and user interface consistency. They serve as centralized repositories for static or infrequently changing data that underpin dynamic application behavior, such as dropdown options, status codes, categorizations, and mappings. By centralizing this reference data, the system promotes maintainability, reduces errors, and enables seamless integration across different modules and services. Lookup tables may also support complex data structures, localization, versioning, and hierarchical relationships to accommodate diverse application requirements and evolving business rules.\n\n**Field behavior**\n- Contains multiple distinct lookup tables, each identified by a unique name representing a specific domain or category of reference data.\n- Each lookup table comprises structured entries, which may be simple key-value pairs or complex objects with multiple attributes, defining valid options, mappings, or metadata.\n- Lookup tables standardize inputs, support UI components like dropdowns and filters, enforce data integrity, and drive conditional logic across the application.\n- Typically static or updated infrequently, ensuring stability and consistency in dependent processes.\n- Supports localization or internationalization to provide values in multiple languages or regional formats.\n- May represent hierarchical or relational data structures within lookup tables to capture complex relationships.\n- Enables versioning to track changes over time and facilitate rollback, auditing, or backward compatibility.\n- Acts as a single source of truth for reference data, reducing duplication and discrepancies across systems.\n- Can be extended or customized to meet specific domain or organizational needs without impacting core application logic.\n\n**Implementation guidance**\n- Organize as a dictionary or map where each key corresponds to a lookup table name and the associated value contains the set of entries.\n- Implement efficient loading, caching, and retrieval mechanisms to optimize performance and minimize latency.\n- Provide controlled update or refresh capabilities to allow seamless modifications without disrupting ongoing application operations.\n- Enforce strict validation rules on lookup entries to prevent invalid, inconsistent, or duplicate data.\n- Incorporate support for localization and internationalization where applicable, enabling multi-language or region-specific values.\n- Maintain versioning or metadata to track changes and support backward compatibility.\n- Consider access control policies if lookup data includes sensitive or restricted information.\n- Design APIs or interfaces for easy querying, filtering, and retrieval of lookup data by client applications.\n- Ensure synchronization mechanisms are in place"},"operation":{"type":"string","enum":["add","update","addupdate","attach","detach","delete"],"description":"Specifies the type of operation to perform on NetSuite records. This is a plain string value, NOT an object.\n\n**Valid values**\n- \"add\" — Create a new record. Use when importing new records only.\n- \"update\" — Update an existing record. Requires internalIdLookup to find the record.\n- \"addupdate\" — Upsert: update if found, create if not. Requires internalIdLookup. 
This is the most common operation.\n- \"attach\" — Attach a record to another record.\n- \"detach\" — Detach a record from another record.\n- \"delete\" — Delete a record. Requires internalIdLookup to find the record.\n\n**Implementation guidance**\n- Value is automatically lowercased by the API.\n- For \"addupdate\", \"update\", and \"delete\", you MUST also set internalIdLookup to specify how to find existing records.\n- For \"add\" with ignoreExisting, set internalIdLookup to check for duplicates before creating.\n- Default to \"addupdate\" when the user says \"sync\", \"upsert\", or \"create or update\".\n- Use \"add\" when the user says \"create\" or \"insert\" without mentioning updates.\n\n**Examples**\n- \"addupdate\" — for syncing records (create or update)\n- \"add\" — for creating new records only\n- \"update\" — for updating existing records only\n- \"delete\" — for removing records\n\n**Important notes**\n- This field is a string, NOT an object. Set it directly: \"operation\": \"addupdate\"\n- Do NOT wrap it in an object like {\"type\": \"addupdate\"} — that will cause a validation error."},"isFileProvider":{"type":"boolean","description":"isFileProvider indicates whether the NetSuite integration functions as a file provider, enabling comprehensive capabilities to access, manage, and manipulate files within the NetSuite environment. When enabled, this property grants the integration full file-related operations such as browsing directories, uploading new files, downloading existing files, updating file metadata, and deleting files directly through the integration interface. This functionality is essential for workflows that require seamless, automated interaction with NetSuite's file storage system, facilitating efficient document handling, version control, and integration with external file management processes. 
It also impacts the availability of specific API endpoints and user interface components related to file management, and may influence system performance and API rate limits due to the volume of file operations.\n\n**Field behavior**\n- When set to true, activates full file management features including browsing, uploading, downloading, updating, and deleting files within NetSuite.\n- When false or omitted, disables all file provider functionalities, preventing any file-related operations through the integration.\n- Acts as a toggle controlling the availability of file-centric capabilities and related API endpoints.\n- Changes typically require re-authentication or reinitialization to apply updated file access permissions correctly.\n- May affect integration performance and API rate limits due to potentially high file operation volumes.\n\n**Implementation guidance**\n- Enable only if direct, comprehensive interaction with NetSuite files is required.\n- Ensure the integration has appropriate NetSuite permissions, roles, and OAuth scopes for secure file access and manipulation.\n- Validate strictly as a boolean to prevent misconfiguration.\n- Assess security implications thoroughly, enforcing strict access controls and audit logging when enabled.\n- Monitor API usage and system performance closely, as file operations can increase load and impact rate limits.\n- Coordinate with NetSuite administrators to align file access policies with organizational security and compliance standards.\n- Consider effects on middleware and downstream systems handling file transfers, error handling, and synchronization.\n\n**Examples**\n- `isFileProvider: true` — Enables full file provider capabilities, allowing browsing, uploading, downloading, and managing files within NetSuite.\n- `isFileProvider: false` — Disables all file-related functionalities, restricting the integration to non-file operations only.\n\n**Important notes**\n- Enabling file provider functionality may require additional NetSuite OAuth scopes, roles, or permissions specifically related to file access.\n- File operations can significantly impact API rate limits and quotas; plan and monitor usage accordingly.\n- This property exclusively controls file management capabilities and does not affect other integration functionalities."},"customFieldMetadata":{"type":"object","description":"Metadata information related to custom fields defined within the NetSuite environment, offering a comprehensive and detailed overview of each custom field's attributes, configuration settings, and operational parameters. This includes critical details such as the field type (e.g., checkbox, select, date, text), label, internal ID, source lists or records for select-type fields, default values, validation rules, display properties, and behavioral flags like mandatory, read-only, or hidden status. The metadata acts as an authoritative reference for accurately managing, validating, and utilizing custom fields during integrations, data processing, and API interactions, ensuring that custom field data is interpreted and handled precisely according to its defined structure, constraints, and business logic. 
It also enables dynamic adaptation to changes in custom field configurations, supporting robust and flexible integration workflows that maintain data integrity and enforce business rules effectively.\n\n**Field behavior**\n- Provides exhaustive metadata for each custom field, including type, label, internal ID, source references, default values, validation criteria, and display properties.\n- Reflects the current and authoritative configuration and constraints of custom fields within the NetSuite account.\n- Enables dynamic validation, processing, and rendering of custom fields during data import, export, and API operations.\n- Supports understanding of field dependencies, mandatory status, read-only or hidden flags, and other behavioral characteristics.\n- Facilitates enforcement of business rules and data integrity through validation and conditional logic based on metadata.\n\n**Implementation guidance**\n- Ensure this property is consistently populated with accurate, up-to-date, and comprehensive metadata for all relevant custom fields in the integration context.\n- Regularly synchronize and refresh metadata with the NetSuite environment to capture any additions, modifications, or deletions of custom fields.\n- Leverage this metadata to validate data payloads before transmission and to correctly interpret and process received data.\n- Use metadata details to implement field-specific logic, such as enforcing mandatory fields, applying validation rules, handling default values, and respecting read-only or hidden statuses.\n- Consider caching metadata where appropriate to optimize performance, while ensuring mechanisms exist to detect and apply updates promptly.\n\n**Examples**\n- Field type: \"checkbox\", label: \"Is Active\", id: \"custentity_is_active\", mandatory: true\n- Field type: \"select\", label: \"Customer Type\", id: \"custentity_customer_type\", sourceList: \"customerTypes\", readOnly: false\n- Field type: \"date\", label: \"Contract Start Date\", id: \"custentity_contract_start"},"recordType":{"type":"string","description":"recordType specifies the exact type of record within the NetSuite system that the API operation will interact with. This property is critical as it defines the schema, fields, validation rules, workflows, and behaviors applicable to the record being accessed, created, updated, or deleted. By accurately specifying the recordType, the API can correctly interpret the structure of the data payload, enforce appropriate business logic, and route the request to the correct processing module within NetSuite. This ensures precise data manipulation, retrieval, and integration aligned with the intended record context. 
Proper use of recordType enables seamless interaction with both standard and custom NetSuite records, supporting robust and flexible API operations across diverse business scenarios.\n\n**Field behavior**\n- Defines the specific NetSuite record type for the API request (e.g., customer, salesOrder, invoice).\n- Directly influences validation rules, available fields, permissible operations, and workflows in the request or response.\n- Determines the context, permissions, and business logic applicable to the record.\n- Must be set accurately and consistently to ensure correct processing and avoid errors.\n- Affects how the API interprets, validates, and processes the associated data payload.\n- Supports interaction with both standard and custom record types configured in the NetSuite account.\n- Enables the API to dynamically adapt to different record structures and business processes based on the recordType.\n\n**Implementation guidance**\n- Use the exact NetSuite internal record type identifiers as defined in official NetSuite documentation and account-specific customizations.\n- Validate the recordType value against the list of supported record types before processing the request to prevent errors.\n- Ensure compatibility of the recordType with the specific API endpoint, operation, and NetSuite account configuration.\n- Implement robust error handling for missing, invalid, or unsupported recordType values, providing clear and actionable feedback.\n- Regularly update the recordType list to reflect changes in NetSuite releases, custom record definitions, and account-specific configurations.\n- Consider role-based access and feature enablement that may affect the availability or behavior of certain record types.\n- When dealing with custom records, verify that the recordType matches the custom record’s internal ID and that the API user has appropriate permissions.\n\n**Examples**\n- \"customer\"\n- \"salesOrder\"\n- \"invoice\"\n- \"employee\"\n- \"itemFulfillment\"\n- \"vendorBill\"\n- \"purchaseOrder\"\n- \"customRecordType123\" (example of a custom record)\n\n**Important notes**\n- The recordType value is case"},"recordTypeId":{"type":"string","description":"recordTypeId is the unique identifier that specifies the exact type or category of a record within the NetSuite system. It defines the structure, fields, validation rules, workflows, and behavior applicable to the record instance, ensuring that API operations target the correct record schema and processing logic. This identifier is essential for routing requests to the appropriate record handlers, enforcing data integrity, and validating the data according to the specific record type's requirements. 
Once set for a record instance, the recordTypeId is immutable, as changing it would imply a fundamentally different record type and could lead to inconsistencies or errors in processing.\n\n**Field behavior**\n- Specifies the precise category or type of NetSuite record being accessed or manipulated.\n- Determines the applicable schema, fields, validation rules, workflows, and business logic for the record.\n- Directs API requests to the correct processing logic, endpoint, and validation routines based on record type.\n- Immutable for a given record instance; changing it indicates a different record type and is not permitted.\n- Influences permissions, access control, and integration behaviors tied to the record type.\n- Affects how related records and dependencies are handled within the system.\n\n**Implementation guidance**\n- Use the exact internal string ID or numeric identifier as defined by NetSuite for record types.\n- Validate the recordTypeId against the current list of supported, enabled, and custom NetSuite record types before processing.\n- Ensure consistency and compatibility between recordTypeId and related fields or data structures within the payload.\n- Implement robust error handling for invalid, unsupported, deprecated, or mismatched recordTypeId values.\n- Keep the recordTypeId definitions synchronized with updates, customizations, and configurations from the NetSuite account to avoid mismatches.\n- Consider case sensitivity and formatting rules as dictated by the NetSuite API specifications.\n- When integrating with custom record types, verify that custom IDs conform to naming conventions and are properly registered in the NetSuite environment.\n\n**Examples**\n- \"customer\" — representing a standard customer record type.\n- \"salesOrder\" — representing a sales order record type.\n- \"invoice\" — representing an invoice record type.\n- Numeric identifiers such as \"123\" if the system uses numeric codes for record types.\n- Custom record type IDs defined within a specific NetSuite account, e.g., \"customRecordType_456\".\n- \"employee\" — representing an employee record type.\n- \"purchaseOrder\" — representing a purchase order record type."},"retryUpdateAsAdd":{"type":"boolean","description":"A boolean flag that controls whether failed update operations in NetSuite integrations should be automatically retried as add operations. When set to true, if an update attempt fails—commonly because the target record does not exist—the system will attempt to add the record instead. This mechanism helps maintain data synchronization by addressing scenarios where records may have been deleted, never created, or are otherwise missing, thereby reducing manual intervention and minimizing data discrepancies. 
It ensures smoother integration workflows by providing a fallback strategy specifically for update failures, enhancing resilience and data consistency.\n\n**Field behavior**\n- Governs retry logic exclusively for update operations within NetSuite integrations.\n- When true, triggers an automatic retry as an add operation upon update failure.\n- When false or unset, update failures result in immediate error responses without retries.\n- Facilitates handling of missing or deleted records to improve synchronization accuracy.\n- Does not influence initial add operations or other API call types.\n- Activates retry logic only after detecting a failed update attempt.\n\n**Implementation guidance**\n- Enable this flag when there is a possibility that update requests target non-existent records.\n- Evaluate the potential for duplicate record creation if the add operation is not idempotent.\n- Implement comprehensive error handling and logging to monitor retry attempts and outcomes.\n- Conduct thorough testing in staging or development environments before deploying to production.\n- Coordinate with other retry or error recovery configurations to avoid conflicting behaviors.\n- Ensure accurate detection of update failure causes to apply retries appropriately and avoid unintended consequences.\n\n**Examples**\n- `retryUpdateAsAdd: true` — Automatically retries adding the record if an update fails due to a missing record.\n- `retryUpdateAsAdd: false` — Update failures are reported immediately without retrying as add operations.\n\n**Important notes**\n- Enabling this flag may lead to duplicate records if update failures occur for reasons other than missing records.\n- Use with caution in environments requiring strict data integrity, auditability, and compliance.\n- This flag only modifies retry behavior after an update failure; it does not alter the initial add or update request logic.\n- Accurate failure cause analysis is critical to ensure retries are applied correctly and safely.\n\n**Dependency chain**\n- Relies on execution of update operations against NetSuite records.\n- Works in conjunction with error detection and handling mechanisms to identify failure reasons.\n- Commonly used alongside other retry policies or error recovery settings to enhance integration robustness.\n\n**Technical details**\n- Data type: boolean\n- Default value: false"},"batchSize":{"type":"number","description":"Specifies the number of records to process in a single batch operation when interacting with the NetSuite API, directly influencing the efficiency, performance, and resource utilization of data processing tasks. This parameter controls how many records are sent or retrieved in one API call, balancing throughput with system constraints such as memory usage, network latency, and API rate limits. Proper configuration of batchSize is essential to optimize processing speed while minimizing the risk of errors, timeouts, or throttling by the NetSuite API. Adjusting batchSize allows for fine-tuning of integration workflows to accommodate varying workloads, system capabilities, and API restrictions, ensuring reliable and efficient data synchronization.  \n**Field behavior:**  \n- Determines the count of records included in each batch request to the NetSuite API.  \n- Directly impacts the number of API calls required to complete processing of all records.  \n- Larger batch sizes can improve throughput by reducing the number of calls but may increase memory consumption, processing latency, and risk of timeouts per call.  
\n- Smaller batch sizes reduce memory footprint and improve responsiveness but may increase the total number of API calls and associated overhead.  \n- Influences error handling complexity, as failures in larger batches may require more extensive retries or partial processing logic.  \n- Affects how quickly data is processed and synchronized between systems.  \n**Implementation guidance:**  \n- Select a batch size that balances optimal performance with available system resources, network conditions, and API constraints.  \n- Conduct thorough performance testing with varying batch sizes to identify the most efficient configuration for your specific workload and environment.  \n- Ensure the batch size complies with NetSuite API limits, including maximum records per request and rate limiting policies.  \n- Implement robust error handling to manage partial failures within batches, including retry mechanisms, logging, and potential batch splitting.  \n- Consider the complexity, size, and processing time of individual records when determining batch size, as more complex or larger records may require smaller batches.  \n- Monitor runtime metrics and adjust batchSize dynamically if possible, based on system load, API response times, and error rates.  \n**Examples:**  \n- 100 (process 100 records per batch for balanced throughput and resource use)  \n- 500 (process 500 records per batch to maximize throughput in high-capacity environments)  \n- 50 (smaller batch size suitable for environments with limited memory, stricter API limits, or higher error sensitivity)  \n**Important notes:**  \n- The batchSize setting"},"internalIdLookup":{"type":"object","properties":{"extract":{"type":"string","description":"Extract the relevant internal ID from the provided NetSuite data response to uniquely identify and reference specific records for subsequent API operations. This involves accurately parsing the response data—whether in JSON, XML, or other formats—to isolate the internal ID, which serves as a critical key for actions such as updates, deletions, or detailed queries within NetSuite. The extraction process must be precise, resilient to variations in response structures, and capable of handling nested or complex data formats to ensure reliable downstream processing and prevent errors in record manipulation.  \n**Field behavior:**  \n- Extracts a unique internal ID value from NetSuite API responses or data objects.  \n- Acts as the primary identifier for referencing and manipulating NetSuite records.  \n- Typically invoked after lookup, search, or query operations to obtain the necessary ID for further API calls.  \n- Supports handling of multiple data formats including JSON, XML, and potentially others.  \n- Ensures the extracted ID is valid, correctly formatted, and usable for subsequent operations.  \n- Handles nested or complex response structures to reliably locate the internal ID.  \n- Operates as a read-only field derived exclusively from API response data.  \n\n**Implementation guidance:**  \n- Implement robust and flexible parsing logic tailored to the expected NetSuite response formats.  \n- Validate the extracted internal ID against expected patterns (e.g., numeric strings) to ensure correctness.  \n- Incorporate comprehensive error handling for cases where the internal ID is missing, malformed, or unexpected.  \n- Design the extraction mechanism to be configurable and adaptable to changes in NetSuite API response schemas or record types.  
\n- Support asynchronous processing if API responses are received asynchronously or in streaming formats.  \n- Log extraction attempts and failures to facilitate debugging and maintenance.  \n- Regularly update extraction logic to accommodate API version changes or schema updates.  \n\n**Examples:**  \n- Extracting `\"internalId\": \"12345\"` from a JSON response such as `{ \"internalId\": \"12345\", \"name\": \"Sample Record\" }`.  \n- Parsing nested XML elements or attributes to retrieve the internalId value, e.g., `<record internalId=\"12345\">`.  \n- Using the extracted internal ID to perform update, delete, or detailed fetch operations on a NetSuite record.  \n- Handling cases where the internal ID is embedded within deeply nested or complex response objects.  \n\n**Important notes:**  \n- The internal ID uniquely identifies records within NetSuite and is essential for accurate and reliable API"},"searchField":{"type":"string","description":"The specific field within a NetSuite record that serves as the key attribute for performing an internal ID lookup search. This field determines which property of the record will be queried to locate and retrieve the corresponding internal ID, thereby directly influencing the precision, relevance, and efficiency of the lookup operation. Selecting an appropriate searchField is critical for ensuring accurate matches and optimal performance, as it typically should be a unique, indexed, or otherwise optimized attribute within the record schema. The choice of searchField impacts not only the speed of the search but also the uniqueness and reliability of the results returned, making it essential to align with the record type, user permissions, and any customizations present in the NetSuite environment. Proper selection of this field helps avoid ambiguous results and enhances the overall robustness of the lookup process.\n\n**Field behavior**\n- Identifies the exact attribute of the NetSuite record to be searched.\n- Directs the lookup process to compare the provided search value against this field.\n- Affects the accuracy, relevance, and uniqueness of the internal ID retrieved.\n- Commonly corresponds to fields that are indexed or uniquely constrained for efficient querying.\n- Determines the scope and filtering of the search within the record type.\n- Influences how the system handles multiple matches or ambiguous results.\n- Must correspond to a searchable field that the user has permission to access.\n\n**Implementation guidance**\n- Use only valid, supported field names as defined in the NetSuite record schema.\n- Prefer fields that are indexed or optimized for search to enhance lookup speed.\n- Verify the existence and searchability of the field on the specific record type being queried.\n- Choose fields that minimize duplicates to avoid ambiguous or multiple results.\n- Ensure exact case sensitivity and spelling alignment with NetSuite’s API definitions.\n- Test the field’s searchability considering user permissions, record configurations, and any customizations.\n- Avoid fields that are write-only, restricted, or non-searchable due to NetSuite permissions or settings.\n- Consider the impact of custom fields or scripts that may alter standard field behavior.\n- Validate that the field supports the type of search operation intended (e.g., exact match, partial match).\n\n**Examples**\n- \"externalId\" — to locate records by an externally assigned identifier.\n- \"email\" — to find records using an email address attribute.\n- \"name\" — to search by 
the record’s name or title field.\n- \"tranId\" — to query transaction records by their transaction ID.\n- \"entityId"},"expression":{"type":"string","description":"A string representing a search expression used to filter or query records within the NetSuite system based on specific criteria. This expression defines the precise conditions that records must meet to be included in the search results, enabling targeted and efficient data retrieval. It supports a rich syntax including logical operators (AND, OR, NOT), comparison operators (=, !=, >, <, >=, <=), field references, and literal values. Expressions can be nested to form complex queries tailored to specific business needs, allowing for granular control over the search logic. This property is essential for performing internal lookups by matching records against the defined criteria, ensuring that only relevant records are returned.\n\n**Field behavior**\n- Defines the criteria for searching or filtering records by specifying one or more conditions.\n- Supports logical operators such as AND, OR, and NOT to combine multiple conditions logically.\n- Allows inclusion of field names, comparison operators, and literal values to construct meaningful expressions.\n- Enables nested expressions for advanced querying capabilities.\n- Used internally to perform lookups by matching records against the defined expression.\n- Returns records that satisfy all specified conditions within the expression.\n- Evaluates expressions dynamically at runtime to reflect current data states.\n- Supports referencing related record fields to enable cross-record filtering.\n\n**Implementation guidance**\n- Ensure the expression syntax strictly conforms to NetSuite’s search expression format and supported operators.\n- Validate the expression prior to execution to catch syntax errors and prevent runtime failures.\n- Use parameterized values or safe input handling to mitigate injection risks and enhance security.\n- Support and correctly parse nested expressions to handle complex query scenarios.\n- Implement graceful handling for cases where the expression yields no matching records, avoiding errors.\n- Optimize expressions for performance by limiting complexity and avoiding overly broad criteria.\n- Provide clear error messages or feedback when expressions are invalid or unsupported.\n- Consider caching frequently used expressions to improve lookup efficiency.\n\n**Examples**\n- `\"status = 'Open' AND type = 'Sales Order'\"`\n- `\"createdDate >= '2023-01-01' AND createdDate <= '2023-12-31'\"`\n- `\"customer.internalId = '12345' OR customer.email = 'example@example.com'\"`\n- `\"NOT (status = 'Closed' OR status = 'Cancelled')\"`\n- `\"amount > 1000 AND (priority = 'High' OR priority = 'Urgent')\"`\n- `\"item.category = 'Electronics' AND quantity >= 10\"`\n- `\"lastModifiedDate >"}},"description":"internalIdLookup is an object that configures how the import locates existing NetSuite records so incoming data can be matched against them. It is not a simple flag: the extract path selects the value to read from each source record, searchField names the NetSuite field to search against, and expression supports more complex, multi-condition lookup criteria. Because every match resolves to the record's NetSuite internal ID, which is unique and stable, this lookup provides precise and unambiguous record identification for update, upsert, and delete operations.\n\n**Field behavior**\n- Defines the matching logic for operations that target existing records; it is required when operation is \"update\", \"addupdate\", or \"delete\".\n- Uses extract to obtain the lookup value from the source data and searchField to choose the NetSuite field it is compared against.\n- Supports expression for multi-field or conditional lookups.\n- Resolves matches to stable NetSuite internal IDs, avoiding ambiguity from names or other display identifiers.\n\n**Implementation guidance**\n- Always provide this object when the operation can modify or remove existing records.\n- Prefer unique, indexed fields such as \"externalId\", \"email\", or \"tranId\" as the searchField to avoid ambiguous matches.\n- Verify that the extract path exists in the source data and is reliably populated before running the import.\n- Plan for the no-match case: \"addupdate\" creates a new record, while \"update\" and \"delete\" report an error for that record (see retryUpdateAsAdd for update fallbacks).\n\n**Examples**\n- { \"extract\": \"externalId\", \"searchField\": \"externalId\" }: match existing records by external ID.\n- { \"extract\": \"email\", \"searchField\": \"email\" }: match customer records by email address.\n\n**Important notes**\n- This field is an object, NOT a boolean; set its extract, searchField, or expression sub-fields rather than a true/false value.\n- For \"add\" with ignoreExisting, it can also be used to check for duplicates before creating new records."},"preferences":{"type":"object","properties":{"ignoreReadOnlyFields":{"type":"boolean","description":"ignoreReadOnlyFields specifies whether the system should bypass any attempts to modify fields designated as read-only during data processing or update operations. When set to true, the system silently ignores changes to these protected fields, preventing errors or exceptions that would otherwise occur if such modifications were attempted. This behavior facilitates smoother integration and more resilient data handling, especially in environments where partial updates are common or where read-only constraints are strictly enforced. Conversely, when set to false, the system will attempt to update all fields, including those marked as read-only, which may result in errors or exceptions if such modifications are not permitted.  \n**Field behavior:**  \n- Controls whether read-only fields are excluded from update or modification attempts.  \n- When true, modifications to read-only fields are skipped silently without raising errors or exceptions.  \n- When false, attempts to modify read-only fields may cause errors, exceptions, or transaction failures.  \n- Helps maintain system stability by preventing unauthorized or unsupported changes to protected data.  
\n- Influences how partial updates and patch operations are processed in the presence of read-only constraints.  \n- Affects error reporting and feedback mechanisms during data validation and update workflows.  \n\n**Implementation guidance:**  \n- Enable this flag to enhance robustness and fault tolerance when integrating with APIs or systems enforcing read-only field restrictions.  \n- Use in scenarios where it is acceptable or expected that read-only fields remain unchanged during update operations.  \n- Ensure downstream business logic, validation rules, and data consistency checks accommodate the possibility that some fields may not be updated.  \n- Carefully evaluate the impact on data integrity, audit requirements, and compliance before enabling this behavior.  \n- Coordinate with audit, logging, and monitoring mechanisms to accurately capture which fields were updated, skipped, or ignored.  \n- Consider the implications for user feedback and error handling in client applications consuming the API.  \n\n**Examples:**  \n- ignoreReadOnlyFields: true — Update operations will silently skip read-only fields, avoiding errors and allowing partial updates to succeed.  \n- ignoreReadOnlyFields: false — Update operations will attempt to modify all fields, potentially triggering errors or exceptions if read-only fields are included.  \n- In a bulk update scenario, setting this to true prevents the entire batch from failing due to attempts to modify read-only fields.  \n- When performing a patch request, enabling this flag allows the system to apply only permissible changes without interruption.  \n\n**Important notes:**  \n- Setting this flag to true does not grant permission to"},"warningAsError":{"type":"boolean","description":"Indicates whether warnings encountered during processing should be escalated and treated as errors, causing the operation to fail immediately upon any warning detection. This setting enforces stricter validation and operational integrity by preventing processes from completing successfully if any warnings arise, thereby ensuring higher data quality, compliance standards, and system reliability. When enabled, it transforms non-critical warnings into blocking errors, which can halt workflows, trigger rollbacks, or initiate error handling routines. 
This behavior is particularly valuable in environments where data accuracy and regulatory adherence are paramount, but it may also increase failure rates and require more robust error management and user communication strategies.\n\n**Field behavior**\n- When set to true, any warning triggers an error state, halting or failing the operation immediately.\n- When set to false, warnings are logged or reported but do not interrupt the workflow or cause failure.\n- Influences validation, data processing, integration, and transactional workflows where warnings might otherwise be non-blocking.\n- Affects error reporting mechanisms and may alter the flow of retries, rollbacks, or aborts.\n- Can change the overall system behavior by elevating the severity of warnings to errors.\n- Impacts downstream processes by potentially preventing partial or inconsistent data states.\n\n**Implementation guidance**\n- Use to enforce strict data integrity and prevent continuation of processes with potential issues.\n- Assess the impact on user experience, as enabling this may increase failure rates and necessitate enhanced error handling and user notifications.\n- Ensure client applications and integrations are designed to handle errors resulting from escalated warnings gracefully.\n- Clearly document the default behavior when this flag is unset to avoid ambiguity for users and developers.\n- Coordinate with logging, monitoring, and alerting systems to provide clear diagnostics and traceability when warnings are treated as errors.\n- Consider providing configuration options at multiple levels (user, project, global) to allow flexible enforcement.\n- Test thoroughly in staging environments to understand the operational impact before enabling in production.\n\n**Examples**\n- warningAsError: true — Data import fails immediately if any warnings are detected during validation, preventing partial or potentially corrupt data ingestion.\n- warningAsError: false — Warnings during data validation are logged but do not stop the import process, allowing for continued operation despite minor issues.\n- warningAsError: true — Integration workflows abort on any warning to maintain strict compliance with regulatory or business rules.\n- warningAsError: false — Batch processing continues despite warnings, with issues flagged for later review.\n- warningAsError: true —"},"skipCustomMetadataRequests":{"type":"boolean","description":"Indicates whether to bypass requests for custom metadata during API operations, enabling optimized performance by reducing unnecessary data retrieval and processing when custom metadata is not required. This flag allows fine-tuning of the API's behavior to specific use cases by controlling the inclusion of potentially resource-intensive custom metadata. By skipping these requests, the system can achieve faster response times, lower bandwidth usage, and reduced computational overhead, especially beneficial in high-throughput or performance-sensitive scenarios. Conversely, including custom metadata ensures comprehensive data context, which is critical for operations requiring full validation, auditing, or enrichment.  \n**Field behavior:**  \n- When set to true, the system omits fetching and processing any custom metadata associated with the request, improving efficiency and reducing payload size.  \n- When set to false or left unspecified, the system retrieves and processes all relevant custom metadata, ensuring complete data context.  
\n- Directly influences the amount of data transmitted and processed, impacting API responsiveness and resource utilization.  \n- Affects downstream workflows that depend on the presence or absence of custom metadata for decision-making or data integrity.  \n**Implementation guidance:**  \n- Employ this flag to optimize performance in scenarios where custom metadata is unnecessary, such as bulk data exports, read-only queries, or environments with stable metadata.  \n- Assess the potential impact on data completeness, validation, auditing, and enrichment to avoid compromising critical business logic.  \n- Clearly communicate the default behavior when the property is omitted to prevent misunderstandings among API consumers.  \n- Enforce boolean validation to maintain consistent and predictable API behavior.  \n- Consider how this setting interacts with caching mechanisms, data synchronization, and other metadata-related preferences to ensure coherent system behavior.  \n**Examples:**  \n- `true`: Skip all custom metadata requests to expedite processing and minimize resource consumption in performance-critical workflows.  \n- `false`: Include custom metadata requests to obtain full metadata details necessary for thorough data handling and validation.  \n**Important notes:**  \n- Skipping custom metadata may lead to incomplete data contexts if downstream processes rely on that metadata for essential functions like validation or auditing.  \n- This setting is most effective when metadata is static, infrequently changed, or irrelevant to the current operation.  \n- Altering this flag can influence caching strategies, data consistency, and synchronization across distributed systems.  \n- Ensure all relevant stakeholders understand the implications of toggling this flag to maintain data integrity and operational correctness.  \n**Dependency chain:**  \n- Relies on the existence and"}},"description":"An object encapsulating the user's customizable settings and options within the NetSuite environment, enabling a highly personalized and efficient user experience. This object encompasses a broad spectrum of configurable preferences including interface layout, language, timezone, notification methods, default currencies, date and time formats, themes, and other user-specific options that directly influence how the system behaves, appears, and interacts with the individual user. Preferences are designed to be flexible, supporting inheritance from role-based or system-wide defaults while allowing users to override settings to suit their unique workflows and requirements. 
The preferences object supports dynamic retrieval and partial updates, ensuring that changes can be made granularly without affecting unrelated settings, thereby maintaining data integrity and user experience consistency.\n\n**Field behavior**\n- Contains user-specific configuration settings that tailor the NetSuite experience to individual needs and roles.\n- Includes preferences related to UI layout, notification settings, default currencies, date/time formats, language, themes, and other personalization options.\n- Supports dynamic retrieval and partial updates, enabling users to modify individual preferences without overwriting the entire object.\n- Allows inheritance of preferences from role-based or system-wide defaults when user-specific settings are not explicitly defined.\n- Changes to preferences immediately affect the user interface, notification delivery, data presentation, and overall system behavior.\n- Preferences persist across sessions and devices, ensuring a consistent user experience.\n- Supports both simple scalar values and complex nested structures to accommodate diverse configuration needs.\n\n**Implementation guidance**\n- Structure as a nested JSON object with clearly defined and documented sub-properties grouped by categories such as notifications, display settings, localization, and system defaults.\n- Validate all input during updates to ensure data integrity, prevent invalid configurations, and maintain system stability.\n- Implement partial update mechanisms (e.g., PATCH semantics) to allow granular and efficient modifications.\n- Enforce strict access controls to ensure only authorized users can view or modify preferences.\n- Consider versioning the preferences schema to support backward compatibility and future enhancements.\n- Provide comprehensive documentation for each sub-property to facilitate correct usage, integration, and maintenance.\n- Optimize retrieval and update operations for performance, especially in environments with large user bases.\n- Ensure compatibility with role-based access controls and system-wide default settings to maintain coherent preference hierarchies.\n\n**Examples**\n- `{ \"language\": \"en-US\", \"timezone\": \"America/New_York\", \"currency\": \"USD\", \"notifications\": { \"email\": true, \"sms\": false } }`\n- `{ \"dashboardLayout\": \"compact\", \""},"file":{"type":"object","properties":{"name":{"type":"string","description":"The name of the file as it will be identified, stored, and managed within the NetSuite system. This filename serves as the primary unique identifier for the file within its designated folder or context, ensuring precise referencing, retrieval, and manipulation across various NetSuite modules, integrations, and user interfaces. It is critical that the filename is clear, descriptive, and relevant, as it is used not only for backend operations but also for display purposes in reports, dashboards, and file listings. The filename must strictly adhere to NetSuite’s naming conventions and restrictions to prevent conflicts, errors, or access issues, including limitations on character usage and length. 
Proper naming facilitates efficient file organization, version control, and seamless integration with external systems.\n\n**Field behavior**\n- Acts as the unique identifier for the file within its folder or storage context, preventing duplication.\n- Used extensively in file retrieval, linking, display, and management operations throughout NetSuite.\n- Must be unique within the folder to avoid overwriting or access conflicts.\n- Supports inclusion of standard file extensions to indicate file type and enable appropriate handling.\n- Case sensitivity may vary depending on the operating environment and integration specifics.\n- Represents only the filename without any directory or path information.\n- Changes to the filename after creation can impact existing references or integrations.\n\n**Implementation guidance**\n- Choose a concise, descriptive, and meaningful name to facilitate easy identification and management.\n- Avoid special characters, symbols, or reserved characters (e.g., \\ / : * ? \" < > |) disallowed by NetSuite’s file naming rules.\n- Ensure the filename length complies with NetSuite’s maximum character limit (typically 255 characters).\n- Always include the appropriate file extension (e.g., .pdf, .docx, .png) to clearly denote the file format.\n- Exclude any path separators, folder names, or hierarchical data—only the filename itself should be provided.\n- Adopt consistent naming conventions that support version control, categorization, or organizational standards.\n- Validate filenames programmatically where possible to enforce compliance and prevent errors.\n- Consider the impact of renaming files on existing workflows and references before making changes.\n\n**Examples**\n- \"invoice_2024_06.pdf\"\n- \"project_plan_v2.docx\"\n- \"company_logo.png\"\n- \"financial_report_Q1_2024.xlsx\"\n- \"user_manual_v1.3.pdf\"\n\n**Important notes**\n- The filename must not contain directory or path information; it strictly represents"},"fileType":{"type":"string","description":"The fileType property specifies the precise format or category of the file managed within the NetSuite file system, such as PDF, CSV, JPEG, or other supported types. This designation is essential because it directly affects how the system processes, displays, stores, and interacts with the file throughout its lifecycle. Correctly identifying the fileType ensures that files are rendered correctly in the user interface, processed through appropriate workflows, and subjected to any file-specific restrictions, permissions, or validations. It is typically required when uploading or creating new files to guarantee compatibility, prevent errors during file operations, and maintain data integrity. 
Additionally, the fileType influences system behaviors such as indexing, searchability, and integration with other NetSuite modules or external systems.\n\n**Field behavior**\n- Defines the file’s format or category, governing system handling, processing, and display.\n- Determines rendering, storage methods, and permissible manipulations within NetSuite.\n- Enables or restricts specific operations based on the file type’s inherent capabilities and limitations.\n- Usually mandatory during file creation or upload to ensure accurate system recognition and handling.\n- Influences validation checks, error handling, and workflow routing during file operations.\n- Affects integration points with other NetSuite features, such as reporting, scripting, or external API interactions.\n\n**Implementation guidance**\n- Validate the fileType against a predefined list of supported NetSuite file types to ensure compatibility and prevent errors.\n- Ensure the fileType accurately reflects the actual content format of the file to avoid processing failures or data corruption.\n- Prefer using standardized MIME types or NetSuite-specific identifiers where applicable for consistency and interoperability.\n- Implement robust error handling to gracefully manage unsupported, unrecognized, or mismatched file types.\n- Consider system versioning, configuration differences, and customizations that may affect supported file types.\n- When changing fileType post-creation, ensure appropriate reprocessing or format conversion to maintain data integrity.\n\n**Examples**\n- \"PDF\" for Portable Document Format files commonly used for documents.\n- \"CSV\" for Comma-Separated Values files frequently used for data exchange.\n- \"JPG\" or \"JPEG\" for image files in JPEG format.\n- \"PLAINTEXT\" for plain text files without formatting.\n- \"EXCEL\" for Microsoft Excel spreadsheet files.\n- \"XML\" for Extensible Markup Language files used in data interchange.\n- \"ZIP\" for compressed archive files.\n\n**Important notes**\n- Consistency between the fileType and the actual file content"},"folder":{"type":"string","description":"The identifier or path of the folder within the NetSuite file cabinet where the file is stored or intended to be stored. This property precisely defines the file's storage location, enabling organized management, categorization, and efficient retrieval within the NetSuite environment. It supports specification either as a unique numeric folder ID or as a string representing the folder path, accommodating both absolute and relative references depending on the API context. 
Proper assignment ensures files are correctly placed within the hierarchical folder structure, facilitating access control, streamlined file operations, and maintaining organizational consistency.\n\n**Field behavior**\n- Specifies the exact folder location for storing or moving the file within the NetSuite file cabinet.\n- Accepts either a numeric folder ID for unambiguous identification or a string folder path for hierarchical referencing.\n- Mandatory when uploading new files or relocating existing files to define their destination.\n- Optional during file metadata retrieval if folder context is implicit or not required.\n- Updating this property on an existing file triggers relocation to the specified folder.\n- Influences file visibility and access permissions based on folder-level security settings.\n- Supports both absolute and relative folder path formats depending on API usage context.\n\n**Implementation guidance**\n- Confirm the target folder exists and is accessible within the NetSuite file cabinet before assignment.\n- Prefer using the internal numeric folder ID to avoid ambiguity and ensure precise targeting.\n- Support both absolute and relative folder paths where the API context allows.\n- Enforce permission validation to verify that the user or integration has adequate rights to access or modify the folder.\n- Normalize folder paths to comply with NetSuite’s hierarchical structure and naming conventions.\n- Provide informative error responses if the folder is invalid, non-existent, or access is denied.\n- When moving files by updating this property, ensure that any dependent metadata or references are updated accordingly.\n\n**Examples**\n- \"123\" (numeric folder ID representing a specific folder)\n- \"/Documents/Invoices\" (string folder path indicating a nested folder structure)\n- \"456\" (numeric folder ID for a project-specific folder)\n- \"Shared/Marketing/Assets\" (relative folder path within the file cabinet hierarchy)\n\n**Important notes**\n- Folder IDs are unique within the NetSuite account and are the recommended method for precise folder referencing.\n- Folder paths must conform to NetSuite’s naming standards and reflect the correct folder hierarchy.\n- Changing the folder property on an existing file effectively moves the file within the file cabinet.\n- Adequate permissions are required to access or modify the specified folder;"},"folderInternalId":{"type":"string","description":"The internal identifier of the folder within the NetSuite file cabinet where the file is stored. This unique, system-generated integer ID precisely specifies the folder location, enabling accurate file operations such as upload, download, retrieval, and organization. 
It is essential for associating files with their respective folders and maintaining the hierarchical structure of the file cabinet, ensuring consistent file management and access control.\n\n**Field behavior**\n- Represents a unique, system-assigned internal ID for a folder in the NetSuite file cabinet.\n- Required to specify or retrieve the exact folder location during file management operations.\n- Must correspond to an existing folder within the NetSuite account to ensure valid file placement.\n- Changing this ID effectively moves the file to a different folder within the file cabinet.\n- Permissions on the folder influence access and operations on the associated files.\n- The field is mandatory when creating or updating a file to define its storage location.\n- Supports hierarchical folder structures by linking files to nested folders via their internal IDs.\n\n**Implementation guidance**\n- Validate that the folderInternalId is a valid integer and corresponds to an existing folder before use.\n- Retrieve valid folder internal IDs through NetSuite’s SuiteScript, REST API, or UI to avoid errors.\n- Implement error handling for cases where the folderInternalId is missing, invalid, or points to a restricted folder.\n- Update this ID carefully, as it changes the file’s folder location and may affect file accessibility.\n- Ensure that the user or integration has appropriate permissions to access or modify the target folder.\n- When moving files between folders, confirm that the destination folder supports the file type and intended use.\n- Cache or store folderInternalId values cautiously to prevent stale references in dynamic folder structures.\n\n**Examples**\n- 12345\n- 67890\n- 101112\n\n**Important notes**\n- This ID is distinct from the folder name; it is a system-generated unique identifier.\n- Modifying the folderInternalId moves the file to a different folder, impacting file organization.\n- Folder permissions and sharing settings can restrict or enable file operations based on this ID.\n- Accurate use of this ID is critical for maintaining the integrity of the file cabinet’s folder hierarchy.\n- The folderInternalId cannot be arbitrarily assigned; it must be obtained from NetSuite’s system.\n- Changes to folder structure or deletion of folders may invalidate existing folderInternalId references.\n\n**Dependency chain**\n- Depends on the existence and structure of folders within the NetSuite"},"internalId":{"type":"string","description":"Unique identifier assigned internally to a file within the NetSuite system, serving as the primary key for that file record. This identifier is essential for programmatic referencing and management of the file, enabling precise operations such as retrieval, update, or deletion. The internalId is immutable once assigned and guaranteed to be unique across all file records, ensuring accurate and consistent identification. It is automatically generated by NetSuite upon file creation and is consistently used across both REST and SOAP API endpoints to perform file-related actions. 
This identifier is critical for maintaining data integrity, enabling seamless integration, and supporting reliable file management workflows within the NetSuite ecosystem.\n\n**Field behavior**\n- Acts as the definitive and immutable identifier for a file within NetSuite.\n- Must be unique for each file record to prevent conflicts.\n- Required for all API operations targeting a specific file, including retrieval, updates, and deletions.\n- Cannot be manually assigned, duplicated, or reassigned to another file.\n- Remains constant throughout the lifecycle of the file record.\n\n**Implementation guidance**\n- Always retrieve the internalId from NetSuite responses after file creation or queries.\n- Use the internalId exclusively for all subsequent API calls involving the file.\n- Avoid any manual assignment or modification of the internalId to prevent inconsistencies.\n- Validate that the internalId conforms to NetSuite’s expected format before use in API calls.\n- Handle errors gracefully when an invalid or non-existent internalId is provided.\n\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"1001\"\n\n**Important notes**\n- Providing an incorrect or non-existent internalId will result in API errors or failed operations.\n- The internalId is distinct and separate from the file name, file path, or any external identifiers.\n- It plays a critical role in maintaining data integrity and ensuring accurate file referencing within NetSuite.\n- The internalId cannot be reused or recycled for different files.\n\n**Dependency chain**\n- Requires the existence of the corresponding file record within NetSuite.\n- Utilized by API methods such as get, update, and delete for file management operations.\n- May be referenced by other NetSuite entities or records that link to the file.\n- Dependent on NetSuite’s internal database and indexing mechanisms for uniqueness and retrieval.\n\n**Technical details**\n- Represented as a string or numeric value depending on the API context.\n- Automatically assigned by NetSuite during the file creation process.\n- Stored as the primary key in the Net"},"backupFolderInternalId":{"type":"string","description":"backupFolderInternalId is the unique internal identifier assigned by NetSuite to a specific folder within the file cabinet designated exclusively for storing backup files. This identifier is essential for accurately locating, managing, and organizing backup data within the system’s hierarchical file storage structure. It ensures that all backup operations—such as file creation, modification, and retrieval—are precisely targeted to the correct folder, thereby facilitating reliable data preservation, streamlined backup workflows, and efficient recovery processes.  \n**Field behavior:**  \n- Uniquely identifies the backup folder within the NetSuite file cabinet, eliminating ambiguity in folder selection.  \n- Used to specify or retrieve the exact storage location for backup files during automated or manual backup operations.  \n- Typically assigned and referenced during backup configuration, file management tasks, or programmatic interactions with the NetSuite API.  \n- Enables seamless integration with automated backup processes by directing files to the intended folder without manual intervention.  \n**Implementation guidance:**  \n- Verify that the internal ID corresponds to an existing, active, and accessible folder within the NetSuite file cabinet before use.  
\n- Ensure the designated folder has appropriate permissions set to allow creation, modification, and retrieval of backup files.  \n- Avoid using internal IDs of folders that are restricted, archived, deleted, or otherwise unsuitable for active backup storage to prevent errors.  \n- Maintain consistent use of this ID across all API calls, scripts, and backup configurations to uphold data integrity and organizational consistency.  \n**Examples:**  \n- \"12345\" (example of a valid internal folder ID)  \n- \"67890\"  \n- \"34567\"  \n**Important notes:**  \n- This identifier is internal to NetSuite and does not correspond to external folder names, display names, or file system paths.  \n- Changing this ID will redirect backup files to a different folder, which may disrupt existing backup workflows or data organization.  \n- Proper folder permissions are essential to avoid backup failures, access denials, or data loss.  \n- Once set for a given backup configuration, this ID should be treated as immutable unless a deliberate and well-documented change is required.  \n**Dependency chain:**  \n- Requires the target folder to exist within the NetSuite file cabinet and be properly configured for backup storage.  \n- Interacts closely with backup configuration settings, file management APIs, and NetSuite’s permission and security models.  \n- Dependent on NetSuite’s internal folder management system to maintain uniqueness and integrity of folder"}},"description":"Represents a comprehensive file object within the NetSuite system, serving as a fundamental entity for uploading, storing, managing, and referencing a wide variety of file types including documents, images, spreadsheets, audio, video, and other media formats. This object facilitates seamless integration and association of file data with NetSuite records, transactions, custom entities, and workflows, enabling efficient document management, retrieval, and automation across the platform. It supports detailed metadata management such as file name, type, size, folder location, internal identifiers, encoding formats, and timestamps, ensuring precise control, traceability, and auditability of files. The file object can represent both files stored internally in the NetSuite file Cabinet and externally referenced files via URLs or other means, providing flexibility in file handling and access. It enforces platform-specific constraints on file size, type, and permissions to maintain system integrity, security, and compliance with organizational policies. 
Additionally, it supports versioning and updates to existing files, allowing for effective file lifecycle management within NetSuite.\n\n**Field behavior**\n- Used to specify, upload, retrieve, update, or link files associated with NetSuite records, transactions, custom entities, or workflows.\n- Supports a broad spectrum of file types including PDFs, images (JPEG, PNG, GIF), spreadsheets (Excel, CSV), text documents, audio, video, and other media formats.\n- Can represent files stored internally within the NetSuite file Cabinet or externally referenced files via URLs or other means.\n- Enables attachment of files to records to provide enhanced context, documentation, and audit trails.\n- Manages comprehensive file metadata including internalId, name, fileType, size, folder location, encoding, and timestamps.\n- Supports versioning and updates to existing files where applicable.\n- Enforces platform-specific restrictions on file size, type, and access permissions.\n- Facilitates secure file access and sharing based on user roles and permissions.\n\n**Implementation guidance**\n- Include complete and accurate metadata such as internalId, name, fileType, size, folder, encoding, and timestamps to ensure reliable file referencing and management.\n- Validate file types and sizes against NetSuite’s platform restrictions prior to upload to prevent errors and ensure compliance.\n- Use appropriate encoding methods (e.g., base64) for file content during API transmission to maintain data integrity.\n- Employ multipart/form-data encoding for file uploads when required by the API specifications.\n- Ensure users have the necessary roles and permissions to upload, retrieve, or modify files"}}},"NetsuiteDistributed":{"type":"object","description":"Configuration for NetSuite Distributed (SuiteApp 2.0) import operations.\nThis is the primary sub-schema for NetSuiteDistributedImport adaptorType.\nThe API field name is \"netsuite_da\".\n\n**Key fields**\n- operation: REQUIRED — the NetSuite operation (add, update, addupdate, etc.)\n- recordType: REQUIRED — the NetSuite record type (e.g., \"customer\", \"salesOrder\")\n- mapping: field mappings from source data to NetSuite fields\n- internalIdLookup: how to find existing records for update/addupdate/delete\n- restletVersion: defaults to \"suiteapp2.0\", rarely needs to be changed\n","properties":{"operation":{"type":"string","enum":["add","update","addupdate","attach","detach","delete"],"description":"The NetSuite operation to perform. REQUIRED. This is a plain string value, NOT an object.\n\n**Valid values**\n- \"add\" — Create new records only.\n- \"update\" — Update existing records. Requires internalIdLookup.\n- \"addupdate\" — Upsert: update if found, create if not. Requires internalIdLookup. Most common.\n- \"attach\" — Attach a record to another record.\n- \"detach\" — Detach a record from another record.\n- \"delete\" — Delete records. Requires internalIdLookup.\n\n**Guidance**\n- Default to \"addupdate\" when user says \"sync\", \"upsert\", or \"create or update\".\n- Use \"add\" when user says \"create\" or \"insert\" without mentioning updates.\n- For \"update\", \"addupdate\", and \"delete\", you MUST also set internalIdLookup.\n\n**Important**\nThis field is a string, NOT an object. Set it directly: \"operation\": \"addupdate\"\nDo NOT send {\"type\": \"addupdate\"} — that causes a Cast error.\n"},"recordType":{"type":"string","description":"The NetSuite record type to import into. 
REQUIRED.\n\nMust match a valid NetSuite record type identifier.\n\n**Examples**\n- \"customer\"\n- \"salesOrder\"\n- \"invoice\"\n- \"purchaseOrder\"\n- \"vendorBill\"\n- \"itemFulfillment\"\n- \"employee\"\n- \"inventoryItem\"\n- \"customrecord_myrecord\" (custom record types)\n"},"recordIdentifier":{"type":"string","description":"Custom record identifier, used to identify the specific record type when\nimporting into custom record types.\n"},"recordTypeId":{"type":"string","description":"The internal record type ID. Used for custom record types in NetSuite where\nthe numeric ID is needed in addition to the recordType string.\n"},"restletVersion":{"type":"string","enum":["suitebundle","suiteapp1.0","suiteapp2.0"],"description":"The version of the NetSuite RESTlet to use. This is a plain STRING, NOT an object.\n\nDefaults to \"suiteapp2.0\" when useSS2Restlets is true, \"suitebundle\" otherwise.\nAlmost always \"suiteapp2.0\" for modern integrations — rarely needs to be explicitly set.\n\nIMPORTANT: Set as a plain string value, e.g. \"suiteapp2.0\"\nDo NOT send {\"type\": \"suiteapp2.0\"} — that causes a Cast error.\n"},"useSS2Restlets":{"type":"boolean","description":"Whether to use SuiteScript 2.0 RESTlets. Defaults to true for modern integrations.\nWhen true, restletVersion defaults to \"suiteapp2.0\".\n"},"missingOrCorruptedDAConfig":{"type":"boolean","description":"Flag indicating whether the Distributed Adaptor configuration is missing or corrupted.\nSet by the system — do not set manually.\n"},"batchSize":{"type":"number","description":"Number of records to process per batch. Controls how many records are sent\nto NetSuite in a single API call. Typical values: 50-200.\n"},"internalIdLookup":{"type":"object","description":"Configuration for looking up existing NetSuite records by internal ID.\nREQUIRED when operation is \"update\", \"addupdate\", or \"delete\".\n\nDefines how to find existing records in NetSuite to match against incoming data.\n","properties":{"extract":{"type":"string","description":"The path in the source record to extract the lookup value from.\nExample: \"internalId\" or \"externalId\"\n"},"searchField":{"type":"string","description":"The NetSuite field to search against.\nExample: \"externalId\", \"email\", \"name\", \"tranId\"\n"},"operator":{"type":"string","description":"The comparison operator for the lookup.\nExample: \"is\", \"contains\", \"startswith\"\n"},"expression":{"type":"string","description":"A NetSuite search expression for complex lookup conditions.\nUsed for multi-field or conditional lookups.\n"}}},"hooks":{"type":"object","description":"Script hooks for custom processing at different stages of the import.\nEach hook references a SuiteScript file and function.\n","properties":{"preMap":{"type":"object","description":"Runs before field mapping is applied.","properties":{"fileInternalId":{"type":"string","description":"NetSuite internal ID of the SuiteScript file."},"function":{"type":"string","description":"Name of the function to execute."},"configuration":{"type":"object","description":"Configuration object passed to the hook function."}}},"postMap":{"type":"object","description":"Runs after field mapping, before submission to NetSuite.","properties":{"fileInternalId":{"type":"string","description":"NetSuite internal ID of the SuiteScript file."},"function":{"type":"string","description":"Name of the function to execute."},"configuration":{"type":"object","description":"Configuration object passed to the hook 
function."}}},"postSubmit":{"type":"object","description":"Runs after the record is submitted to NetSuite.","properties":{"fileInternalId":{"type":"string","description":"NetSuite internal ID of the SuiteScript file."},"function":{"type":"string","description":"Name of the function to execute."},"configuration":{"type":"object","description":"Configuration object passed to the hook function."}}}}},"mapping":{"type":"object","description":"Field mappings that define how source data fields map to NetSuite record fields.\nContains both body-level fields and sublist (line-item) fields.\n","properties":{"fields":{"type":"array","items":{"type":"object","properties":{"extract":{"type":"string","description":"Path in the source record to extract the value from.\nUse dot notation for nested fields (e.g., \"address.city\").\n"},"generate":{"type":"string","description":"The NetSuite field ID to write the value to (e.g., \"companyname\", \"email\", \"subsidiary\").\n"},"hardCodedValue":{"type":"string","description":"A static value to always use instead of extracting from source data.\nMutually exclusive with extract.\n"},"lookupName":{"type":"string","description":"Reference to a lookup defined in netsuite_da.lookups by name."},"dataType":{"type":"string","description":"Data type hint for the field value (e.g., \"string\", \"number\", \"date\", \"boolean\").\n"},"internalId":{"type":"boolean","description":"Whether the value is a NetSuite internal ID reference."},"immutable":{"type":"boolean","description":"When true, this field is only set on record creation, not on updates."},"discardIfEmpty":{"type":"boolean","description":"When true, skip this field mapping if the extracted value is empty."},"extractDateFormat":{"type":"string","description":"Date format of the extracted value (e.g., \"MM/DD/YYYY\", \"ISO8601\").\nUsed to parse date strings from source data.\n"},"extractDateTimezone":{"type":"string","description":"Timezone of the extracted date value (e.g., \"America/New_York\")."},"subRecordMapping":{"type":"object","description":"Nested mapping for NetSuite subrecord fields (e.g., address, inventory detail)."},"conditional":{"type":"object","description":"Conditional logic for when to apply this field mapping.\n","properties":{"lookupName":{"type":"string","description":"Lookup to evaluate for the condition."},"when":{"type":"string","enum":["record_created","record_updated","extract_not_empty","lookup_not_empty","lookup_empty","ignore_if_set"],"description":"When to apply this mapping:\n- record_created: only on new records\n- record_updated: only on existing records\n- extract_not_empty: only when extracted value is non-empty\n- lookup_not_empty: only when lookup returns a value\n- lookup_empty: only when lookup returns no value\n- ignore_if_set: skip if field already has a value\n"}}}}},"description":"Body-level field mappings. 
Each entry maps a source field to a NetSuite body field.\n"},"lists":{"type":"array","items":{"type":"object","properties":{"generate":{"type":"string","description":"The NetSuite sublist ID (e.g., \"item\" for sales order line items,\n\"addressbook\" for address sublists).\n"},"jsonPath":{"type":"string","description":"JSON path in the source data that contains the array of sublist records.\n"},"fields":{"type":"array","items":{"type":"object","properties":{"extract":{"type":"string","description":"Path in the sublist record to extract the value from."},"generate":{"type":"string","description":"NetSuite sublist field ID to write to."},"hardCodedValue":{"type":"string","description":"Static value for this sublist field."},"lookupName":{"type":"string","description":"Reference to a lookup by name."},"dataType":{"type":"string","description":"Data type hint for the field value."},"internalId":{"type":"boolean","description":"Whether the value is a NetSuite internal ID reference."},"isKey":{"type":"boolean","description":"Whether this field is a key field for matching existing sublist lines."},"immutable":{"type":"boolean","description":"Only set on record creation, not updates."},"discardIfEmpty":{"type":"boolean","description":"Skip if the extracted value is empty."},"extractDateFormat":{"type":"string","description":"Date format of the extracted value."},"extractDateTimezone":{"type":"string","description":"Timezone of the extracted date value."},"subRecordMapping":{"type":"object","description":"Nested mapping for subrecord fields within the sublist."},"conditional":{"type":"object","properties":{"lookupName":{"type":"string"},"when":{"type":"string","enum":["record_created","record_updated","extract_not_empty","lookup_not_empty","lookup_empty","ignore_if_set"]}}}}},"description":"Field mappings for each column in the sublist."}}},"description":"Sublist (line-item) mappings. Each entry maps source data to a NetSuite sublist.\n"}}},"lookups":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique name for this lookup, referenced by lookupName in field mappings.\n"},"recordType":{"type":"string","description":"NetSuite record type to search (e.g., \"customer\", \"item\").\n"},"searchField":{"type":"string","description":"NetSuite field to search against (e.g., \"email\", \"externalId\", \"name\").\n"},"resultField":{"type":"string","description":"Field from the lookup result to return (e.g., \"internalId\").\n"},"expression":{"type":"string","description":"NetSuite search expression for complex lookup conditions.\n"},"operator":{"type":"string","description":"Comparison operator (e.g., \"is\", \"contains\", \"startswith\").\n"},"includeInactive":{"type":"boolean","description":"Whether to include inactive records in lookup results."},"useDefaultOnMultipleMatches":{"type":"boolean","description":"When true, use the default value if multiple records match the lookup."},"allowFailures":{"type":"boolean","description":"When true, lookup failures do not stop the import."},"map":{"type":"object","description":"Static value mapping for lookup results (key-value pairs)."},"default":{"type":"string","description":"Default value when lookup returns no results."}}},"description":"Lookup definitions used to resolve reference values from NetSuite.\nReferenced by name from field mappings via the lookupName property.\n"},"rawOverride":{"type":"object","description":"Raw override object for advanced use cases. 
When useRawOverride is true,\nthis object is sent directly to the NetSuite API, bypassing normal mapping.\n"},"useRawOverride":{"type":"boolean","description":"When true, uses rawOverride instead of the normal mapping configuration.\n"},"isMigrated":{"type":"boolean","description":"System flag indicating whether this import was migrated from a legacy format.\n"},"preferences":{"type":"object","properties":{"ignoreReadOnlyFields":{"type":"boolean","description":"When true, silently skip read-only fields instead of raising errors."},"warningAsError":{"type":"boolean","description":"When true, treat NetSuite warnings as errors that stop the import."},"skipCustomMetadataRequests":{"type":"boolean","description":"When true, skip fetching custom field metadata to improve performance."}},"description":"Import behavior preferences that control how NetSuite handles the import operation.\n"},"retryUpdateAsAdd":{"type":"boolean","description":"When true, if an update fails because the record doesn't exist, automatically\nretry as an add operation. Useful for initial syncs where records may not exist yet.\n"},"isFileProvider":{"type":"boolean","description":"Whether this import handles files in the NetSuite File Cabinet.\n"},"file":{"type":"object","description":"File cabinet configuration for file-based imports into NetSuite.\n","properties":{"name":{"type":"string","description":"Filename for the file in NetSuite File Cabinet."},"fileType":{"type":"string","description":"NetSuite file type (e.g., \"PDF\", \"CSV\", \"PLAINTEXT\", \"EXCEL\", \"XML\").\n"},"folder":{"type":"string","description":"Folder path or name in the NetSuite File Cabinet."},"folderInternalId":{"type":"string","description":"Internal ID of the target folder in NetSuite File Cabinet."},"internalId":{"type":"string","description":"Internal ID of an existing file to update."},"backupFolderInternalId":{"type":"string","description":"Internal ID of a backup folder for file versioning."}}},"customFieldMetadata":{"type":"object","description":"Metadata about custom fields on the target record type.\nPopulated by the system from NetSuite metadata — do not set manually.\n"}}},"Rdbms":{"type":"object","description":"Configuration for RDBMS import operations. Used for database imports into SQL Server, MySQL, PostgreSQL, Snowflake, Oracle, MariaDB, and other relational databases.\n\n**Query type determines which fields are required**\n\n| queryType          | Required fields                | Do NOT set        |\n|--------------------|--------------------------------|-------------------|\n| [\"per_record\"]     | query (array of SQL strings)   | bulkInsert        |\n| [\"per_page\"]       | query (array of SQL strings)   | bulkInsert        |\n| [\"bulk_insert\"]    | bulkInsert object              | query             |\n| [\"bulk_load\"]      | bulkLoad object                | query             |\n\n**Critical:** query IS AN ARRAY OF STRINGS\nThe query field must be an array of plain strings, NOT a single string and NOT an array of objects.\nCorrect: [\"MERGE INTO target USING ...\"]\nWrong: \"MERGE INTO target ...\"\nWrong: [{\"query\": \"MERGE INTO target ...\"}]\n","properties":{"lookups":{"type":["string","null"],"description":"A collection of predefined reference data sets or key-value pairs designed to standardize, validate, and streamline input values across the application. 
These lookup entries serve as a centralized repository of commonly used values that keep incoming data consistent before it is written to the database, for example by translating source codes into the values or identifiers expected by the target tables. Lookup data can be static or updated as business requirements change.\n\n**Field behavior**\n- Contains structured reference data such as codes, labels, enumerations, or key-value mappings.\n- Standardizes input values and restricts them to predefined acceptable values, supporting validation during the import.\n\n**Examples**\n- Country codes and names: {\"US\": \"United States\", \"CA\": \"Canada\", \"MX\": \"Mexico\"}\n- Status codes representing entity states: {\"1\": \"Active\", \"2\": \"Inactive\", \"3\": \"Pending\"}\n- Payment methods: {\"CC\": \"Credit Card\", \"PP\": \"PayPal\"}"},"query":{"type":"array","items":{"type":"string"},"description":"Array of SQL query strings to execute. REQUIRED when queryType is [\"per_record\"], [\"per_page\"], or [\"first_page\"].\n\nEach element is a complete SQL statement as a plain string. Typically contains a single query.\n\nUse Handlebars to inject values from incoming records. 
RDBMS queries REQUIRE the `record.` prefix and triple braces `{{{ }}}`.\n\n**Format — array of strings**\nThis field is an ARRAY OF STRINGS, not a single string and not an array of objects.\n- CORRECT: [\"INSERT INTO users (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n- WRONG: \"INSERT INTO users ...\"  (not an array)\n- WRONG: [{\"query\": \"INSERT INTO users ...\"}]  (objects are invalid — causes Cast error)\n- WRONG: [\"INSERT INTO users (name) VALUES ('{{name}}')\"]  (missing record. prefix — will produce empty values)\n\n**Critical:** RDBMS HANDLEBARS SYNTAX\n- MUST use `record.` prefix: `{{{record.fieldName}}}`, NOT `{{{fieldName}}}`\n- MUST use triple braces `{{{ }}}` for value references — in RDBMS context, `{{record.field}}` outputs `'value'` (wrapped in single quotes), while `{{{record.field}}}` outputs `value` (raw). Triple braces give you control over quoting in your SQL.\n- Block helpers (`{{#each}}`, `{{#if}}`, `{{/each}}`) use double braces as normal.\n- Nested fields: `{{{record.properties.email}}}`\n\n**Examples**\n- INSERT: [\"INSERT INTO customers (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n- UPDATE: [\"UPDATE inventory SET qty = {{{record.quantity}}} WHERE sku = '{{{record.sku}}}'\"]\n- UPSERT (MySQL): [\"INSERT INTO users (id, name) VALUES ({{{record.id}}}, '{{{record.name}}}') ON DUPLICATE KEY UPDATE name = '{{{record.name}}}'\"]\n- MERGE (Snowflake): [\"MERGE INTO target USING (SELECT '{{{record.email}}}' AS email) AS src ON target.email = src.email WHEN MATCHED THEN UPDATE SET name = '{{{record.name}}}' WHEN NOT MATCHED THEN INSERT (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n\n**String vs numeric values in handlebars**\n- Strings: wrap in single quotes — '{{{record.name}}}'\n- Numbers: no quotes — {{{record.quantity}}}\n"},"queryType":{"type":["array","null"],"items":{"type":"string","enum":["INSERT","UPDATE","bulk_insert","per_record","per_page","first_page","bulk_load"]},"description":"Specifies the execution strategy for the SQL operation.\n\n**CRITICAL: This field DETERMINES whether `query` or `bulkInsert` is required.**\n\n**Preference order (use the highest-performance option the operation supports)**\n\n1. **`[\"bulk_load\"]`** — fastest. Stages data as a file then loads via COPY/bulk mechanism. Supports upsert via `primaryKeys`, and custom merge/ignore logic via `overrideMergeQuery`. **Currently Snowflake and NSAW.**\n2. **`[\"bulk_insert\"]`** — batch INSERT via multi-row VALUES. No upsert/ignore logic. All RDBMS types.\n3. **`[\"per_page\"]`** — one SQL statement per page of records. All RDBMS types.\n4. **`[\"per_record\"]`** — one SQL statement per record. Full control over individual record SQL. All RDBMS types.\n\nAlways prefer the highest option that satisfies the operation. `bulk_load` handles INSERT, UPSERT, and custom merge logic. `bulk_insert` is best for pure INSERTs when `bulk_load` isn't available. `per_page` handles custom SQL at batch level. `per_record` gives full control over individual record SQL.\n\n**Decision tree (Follow in order)**\n\n**STEP 1: Does the database support bulk_load?**\n- If Snowflake, NSAW, or another database with bulk_load support:\n  → **USE `[\"bulk_load\"]`** with:\n    - `bulkLoad.tableName` (required)\n    - `bulkLoad.primaryKeys` for upsert/merge (auto-generates MERGE)\n    - `bulkLoad.overrideMergeQuery: true` for custom SQL (ignore-existing, conditional updates, multi-table ops). 
Override SQL references `{{import.rdbms.bulkLoad.preMergeTemporaryTable}}` for the staging table.\n    - No `primaryKeys` for pure INSERT (auto-generates INSERT)\n\n**STEP 2: Database does NOT support bulk_load**\n- **Pure INSERT**: → **USE `[\"bulk_insert\"]`** (requires `bulkInsert` object)\n- **UPDATE / UPSERT / custom matching logic**: → **USE `[\"per_page\"]`** (requires `query` field). Only use `[\"per_record\"]` if the logic truly cannot be expressed as a batch operation.\n\n**Critical relationship to other fields**\n\n| queryType        | REQUIRES         | DO NOT SET       |\n|------------------|------------------|------------------|\n| `[\"per_record\"]` | `query` field    | `bulkInsert`     |\n| `[\"bulk_insert\"]`| `bulkInsert` obj | `query`          |\n\n**Do not use**\n- `[\"INSERT\"]` or `[\"UPDATE\"]` as standalone values\n- Multiple types combined\n\n**Examples**\n\n**Insert with ignoreExisting (MUST use per_record)**\nPrompt: \"Import customers, ignore existing based on email\"\n```json\n\"queryType\": [\"per_record\"],\n\"query\": [\"INSERT INTO customers (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n```\n\n**Pure insert (no duplicate checking)**\nPrompt: \"Insert all records into users table\"\n```json\n\"queryType\": [\"bulk_insert\"],\n\"bulkInsert\": { ... }\n```\n\n**Update Operation**\nPrompt: \"Update inventory counts\"\n```json\n\"queryType\": [\"per_record\"],\n\"query\": [\"UPDATE inventory SET qty = {{{record.qty}}} WHERE sku = '{{{record.sku}}}'\"]\n```\n\n**Run Once Per Page**\nPrompt: \"Execute this stored procedure for each page of results\"\n```json\n\"queryType\": [\"per_page\"]\n```\n**per_page uses a different Handlebars context than per_record.** The context is the entire batch (`batch_of_records`), not a single `record.` object. Loop through with `{{#each batch_of_records}}...{{{record.fieldName}}}...{{/each}}`.\n\n**Run Once at Start**\nPrompt: \"Run this cleanup script before processing\"\n```json\n\"queryType\": [\"first_page\"]\n```"},"bulkInsert":{"type":"object","description":"**CRITICAL: This object is ONLY used when `queryType` is `[\"bulk_insert\"]`.**\n\n✅ **SET THIS** when:\n- `queryType` is `[\"bulk_insert\"]`\n- Pure INSERT operation with NO ignoreExisting/duplicate checking\n\n❌ **DO NOT SET THIS** when:\n- `queryType` is `[\"per_record\"]` (use `query` field instead)\n- Operation involves UPDATE, UPSERT, or ignoreExisting logic\n\nIf the prompt mentions \"ignore existing\", \"skip duplicates\", or \"match on field\",\nuse `queryType: [\"per_record\"]` with the `query` field instead of this object.","properties":{"tableName":{"type":"string","description":"The name of the database table into which the bulk insert operation will be executed. This value must correspond to a valid, existing table within the target relational database management system (RDBMS). It serves as the primary destination for inserting multiple rows of data efficiently in a single operation. The table name can include schema or namespace qualifiers if supported by the database (e.g., \"schemaName.tableName\"), allowing precise targeting within complex database structures. 
Proper validation and sanitization of this value are essential to prevent SQL injection and to ensure the operation succeeds.\n\n**Field behavior**\n- Specifies the destination table used directly in the generated `INSERT INTO` statement.\n- Must correspond to an existing table in the database schema; case sensitivity and naming conventions depend on the underlying RDBMS.\n- Supports schema-qualified names where applicable (e.g., \"schemaName.tableName\").\n\n**Implementation guidance**\n- Verify the table exists and is accessible, and that the connection has permission to insert into it.\n- Follow the RDBMS naming rules (reserved keywords, allowed characters, maximum length) and apply the appropriate quoting or escaping (e.g., backticks for MySQL, double quotes for PostgreSQL).\n- Use schema qualifiers consistently in multi-schema environments to avoid ambiguity, and never build this value from unvalidated dynamic input.\n\n**Examples**\n- \"users\"\n- \"public.orders\"\n- \"inventory.products\"\n- \"sales_data_2024\"\n- \"dbo.CustomerRecords\"\n\n**Important notes**\n- Misspelled or non-existent table names cause the operation to fail.\n- Case sensitivity varies by database; for example, PostgreSQL treats quoted identifiers as case-sensitive."},"batchSize":{"type":"string","description":"The number of records sent to the database in a single batch during a bulk insert operation. Tuning batchSize balances memory consumption, transaction overhead, and throughput. Supply the value as a string (the field type is string), e.g., \"1000\".\n\n**Field behavior**\n- Determines how many records are grouped and committed per database transaction, and therefore the granularity of error handling, retries, and rollback.\n- Larger batches improve throughput but increase memory usage, transaction duration, and the risk of locks or timeouts; smaller batches reduce those risks at the cost of more transactions and overhead.\n\n**Implementation guidance**\n- Choose a value that respects the database's transaction limits, available memory, and network conditions, and benchmark under realistic load.\n- Account for the size and complexity of individual records; larger or more complex records usually call for smaller batches.\n\n**Examples**\n- \"100\": for systems with limited memory or where minimizing transaction size is critical.\n- \"1000\": a balanced choice for moderate bulk insert operations.\n- \"5000\": a common default that trades off performance and resource usage.\n- \"50000\": for high-throughput environments with ample memory and a tuned database configuration."}}},"bulkLoad":{"type":"object","properties":{"tableName":{"type":"string","description":"The name of the target table within the RDBMS where the bulk load operation will insert, update, or merge data. It must correspond to a valid, existing table in the database schema, may include schema qualifiers if supported by the database (e.g., schema.tableName), and must adhere to the naming conventions and case sensitivity rules of the target RDBMS. 
Accurate definition of the table name helps maintain data integrity and prevents operational failures during bulk load processes.\n\n**Field behavior**\n- Identifies the specific table in the database to receive the bulk-loaded data.\n- Must reference an existing table accessible with the current database credentials.\n- Supports schema-qualified names where applicable (e.g., schema.tableName).\n- Used as the target for insert, update, or merge operations during bulk load.\n- Case sensitivity and naming conventions depend on the underlying database system.\n- Determines the scope and context of the bulk load operation within the database.\n- Influences transactional behavior, locking, and concurrency controls during the operation.\n\n**Implementation guidance**\n- Verify the target table exists and the executing user has sufficient permissions for data modification.\n- Ensure the table name respects the case sensitivity and naming rules of the RDBMS.\n- Avoid reserved keywords or special characters unless properly escaped or quoted according to database requirements.\n- Support fully qualified names including schema or database prefixes if required to avoid ambiguity.\n- Validate compatibility between the table schema and the incoming data to prevent type mismatches or constraint violations.\n- Consider transactional behavior, locking, and concurrency implications on the target table during bulk load.\n- Test the bulk load operation in a development or staging environment to confirm correct table targeting.\n- Implement error handling to catch and report issues related to invalid or inaccessible table names.\n\n**Examples**\n- \"customers\"\n- \"sales_data_2024\"\n- \"public.orders\"\n- \"inventory.products\"\n- \"dbo.EmployeeRecords\"\n\n**Important notes**\n- Incorrect or misspelled table names will cause the bulk load operation to fail.\n- The target table schema must be compatible with the incoming data structure to avoid errors.\n- Bulk loading may overwrite, append, or merge data depending on the load mode; ensure this aligns with business requirements.\n- Some databases require quoting or escaping of table names, especially if they"},"primaryKeys":{"type":["string","null"],"description":"primaryKeys specifies the list of column names that uniquely identify each record in the target relational database table during a bulk load operation. These keys are essential for maintaining data integrity by enforcing uniqueness constraints and enabling precise identification of rows for operations such as inserts, updates, upserts, or conflict resolution. Properly defining primaryKeys ensures that duplicate records are detected and handled appropriately, preventing data inconsistencies and supporting efficient data merging processes. This property supports both single-column and composite primary keys, requiring the exact column names as defined in the target schema, and plays a critical role in guiding the bulk load mechanism to correctly match and manipulate records based on their unique identifiers.  \n**Field behavior:**  \n- Defines one or more columns that collectively serve as the unique identifier for each row in the table.  \n- Enforces uniqueness constraints during bulk load operations to prevent duplicate entries.  \n- Facilitates detection and resolution of conflicts or duplicates during data insertion or updating.  \n- Influences the behavior of upsert, merge, or conflict resolution mechanisms in the bulk load process.  
\n- Ensures that each record can be reliably matched and updated based on the specified keys.  \n- Supports both single-column and composite keys; keep the column order consistent with the database schema.  \n**Implementation guidance:**  \n- Include every column that forms the complete key, with names exactly matching the target database (respect case sensitivity where applicable).  \n- Validate that all key columns exist in both the source data and the target table before initiating the bulk load, and confirm they are indexed or constrained appropriately.  \n- Prefer stable, immutable key columns to prevent inconsistencies across repeated load operations.  \n**Examples:**  \n- A single key column such as id.  \n- A composite key such as order_id plus product_id.  \n**Important notes:**  \n- Omitting primaryKeys causes the bulk load to generate a plain INSERT with no upsert or conflict handling.  \n- For composite keys, the column order should match the schema definition."},"overrideMergeQuery":{"type":"boolean","description":"Flag that replaces the auto-generated merge statement with a custom merge query during bulk load.\n\n**Field behavior**\n- When true, the system does not generate the default MERGE from tableName and primaryKeys; the custom merge SQL defined for the import is executed instead, referencing the staging table as {{import.rdbms.bulkLoad.preMergeTemporaryTable}}.\n- When false or omitted, the default behavior applies: a plain INSERT when primaryKeys is not set, or an auto-generated MERGE keyed on primaryKeys when it is.\n\n**Implementation guidance**\n- Use for ignore-existing logic, conditional updates, or multi-table operations that the auto-generated MERGE cannot express.\n- Ensure the custom SQL is valid for the target RDBMS, covers all required insert and update cases, and has been tested in a staging environment before production use.\n"}},"description":"Configuration for bulk-load operations. REQUIRED when queryType is [\"bulk_load\"]; do not set it for other query types.\n\nBulk load stages records as a file and loads them through the database's COPY/bulk mechanism (currently Snowflake and NSAW), making it the fastest option for large volumes.\n\n**Usage**\n- Set tableName (required).\n- Set primaryKeys for upsert/merge behavior; the system auto-generates a MERGE.\n- Omit primaryKeys for a pure INSERT.\n- Set overrideMergeQuery to true for custom merge or ignore-existing logic.\n"},"updateLookupName":{"type":"string","description":"Specifies the exact name of the lookup table or entity within the RDBMS that is targeted for update operations. It must strictly align with the existing schema to enable successful updates without errors or unintended side effects, as it directly influences which records are affected. 
Accurate specification helps prevent accidental data corruption and supports clear, maintainable database interactions.\n\n**Field behavior**\n- Defines the specific lookup table or entity to be updated during an operation.\n- Acts as a key identifier for the update process to locate the correct data set.\n- Must be provided when performing update operations on lookup data.\n- Influences the scope and effect of the update by specifying the target entity.\n- Changes to this value directly affect which data is modified in the database.\n- Invalid or missing values will cause update operations to fail or target unintended data.\n\n**Implementation guidance**\n- Ensure the value exactly matches the existing lookup table or entity name in the database schema.\n- Validate against the RDBMS naming conventions, including allowed characters, length restrictions, and case sensitivity.\n- Maintain consistent naming conventions across the application to prevent ambiguity and errors.\n- Account for case sensitivity based on the underlying database system’s configuration and collation settings.\n- Avoid using reserved keywords, special characters, or whitespace that may cause conflicts or syntax errors.\n- Implement validation checks to catch misspellings or invalid names before executing update operations.\n- Incorporate error handling to manage cases where the specified lookup name does not exist or is inaccessible.\n\n**Examples**\n- \"country_codes\"\n- \"user_roles\"\n- \"product_categories\"\n- \"status_lookup\"\n- \"department_list\"\n\n**Important notes**\n- Providing an incorrect or misspelled name will result in failed update operations or runtime errors.\n- This field must not be left empty when an update operation is intended.\n- It is only relevant and required when performing update operations on lookup tables or entities.\n- Changes to this value should be carefully managed to avoid unintended data modifications or corruption.\n- Ensure appropriate permissions exist for updating the specified lookup entity to prevent authorization failures.\n- Consistent use of this property across environments (development, staging, production) is essential"},"updateExtract":{"type":"string","description":"Specifies the comprehensive configuration and parameters for updating an existing data extract within a relational database management system (RDBMS). This property defines how the extract's data is modified, supporting various update strategies such as incremental updates, full refreshes, conditional changes, or merges. It ensures that the extract remains accurate, consistent, and aligned with business rules and data integrity requirements by controlling the scope, method, and transactional behavior of update operations. 
Additionally, it allows selective updates through filters, timestamps, or other criteria while preserving concurrency control and data consistency.\n\n**Examples**\n- An incremental update that uses a last-modified timestamp column to apply only new or changed records.\n- A full refresh that completely replaces the existing data with a newly extracted dataset."},"ignoreLookupName":{"type":"string","description":"The name of the lookup used to determine whether an incoming record already exists in the database so that it can be ignored rather than imported (for example, when ignore-existing behavior is configured). The referenced lookup is evaluated for each record, and whether it returns a result decides whether the record is skipped. Typically used as an alternative to ignoreExtract.\n"},"ignoreExtract":{"type":"string","description":"Path in the source record whose value is used for ignore logic. The extracted value identifies the record to check against the database so that it can be skipped when it already exists (or is missing, depending on the configured ignore behavior). Typically used as an alternative to ignoreLookupName.\n"}}},"S3":{"type":"object","description":"Configuration for S3 imports","properties":{"region":{"type":"string","description":"The AWS region where the S3 bucket is located. The region determines data access latency, availability, redundancy, and compliance with regional data governance, privacy, and residency requirements, and it affects costs for data transfer and storage. It must be specified as a valid AWS region identifier that matches the bucket's actual location to avoid connectivity issues, authentication failures, and improper routing of API requests. 
Proper region selection also influences disaster recovery strategies, service availability, and integration with other AWS services, making it a foundational parameter in S3 bucket configuration and management.\n\n**Field behavior**\n- Defines the physical and logical location of the S3 bucket within AWS's global network.\n- Directly impacts data access speed, latency, and throughput.\n- Determines compliance with regional data sovereignty, privacy, and security regulations.\n- Must be a valid AWS region code that matches the bucket’s deployment location.\n- Influences availability zones, fault tolerance, and disaster recovery planning.\n- Affects endpoint URL construction and API request routing.\n- Plays a role in cost optimization by affecting storage pricing and data transfer fees.\n- Governs integration capabilities with other AWS services available in the selected region.\n\n**Implementation guidance**\n- Use official AWS region codes such as \"us-east-1\", \"eu-west-2\", or \"ap-southeast-2\" as defined by AWS.\n- Validate the region value against the current list of supported AWS regions to prevent misconfiguration.\n- Ensure the specified region exactly matches the bucket’s physical location to avoid access and authentication errors.\n- Consider cost implications including storage pricing, data transfer fees, and cross-region replication expenses.\n- When migrating or replicating buckets across regions, update this property accordingly and plan for data transfer and downtime.\n- Verify service availability and feature support in the chosen region before deployment.\n- Regularly review AWS region updates and deprecations to maintain compatibility.\n- Use region-specific endpoints when constructing API requests to ensure proper routing.\n\n**Examples**\n- us-east-1 (Northern Virginia, USA)\n- eu-central-1 (Frankfurt, Germany)\n- ap-southeast-2 (Sydney, Australia)\n- sa-east-1 (São Paulo, Brazil)\n- ca-central-1 (Canada Central)\n\n**Important notes**\n- Incorrect or mismatched region specification"},"bucket":{"type":"string","description":"The name of the Amazon S3 bucket where data or objects are stored or will be stored. This bucket serves as the primary container within the AWS S3 service, uniquely identifying the storage location for your data across all AWS accounts globally. The bucket name must strictly comply with AWS naming conventions, including being DNS-compliant, entirely lowercase, and free of underscores, uppercase letters, or IP address-like formats, ensuring global uniqueness and compatibility. It is essential that the specified bucket already exists and that the user or service has the necessary permissions to access, upload, or modify its contents. 
Selecting the appropriate bucket, particularly with regard to its AWS region, can significantly impact application performance, cost efficiency, and adherence to data residency and regulatory requirements.\n\n**Field behavior**\n- Specifies the target S3 bucket for all data storage and retrieval operations.\n- Must reference an existing bucket with proper access permissions.\n- Acts as a globally unique identifier within the AWS S3 ecosystem.\n- Immutable after initial assignment; changing the bucket requires specifying a new bucket name.\n- Influences data locality, latency, cost, and compliance based on the bucket’s region.\n\n**Implementation guidance**\n- Validate bucket names against AWS S3 naming rules: lowercase letters, numbers, and hyphens only; no underscores, uppercase letters, or IP address formats.\n- Confirm the bucket’s existence and verify access permissions before performing operations.\n- Consider the bucket’s AWS region to optimize latency, cost, and compliance with data governance policies.\n- Implement comprehensive error handling for scenarios such as bucket not found, access denied, or insufficient permissions.\n- Align bucket selection with organizational policies and regulatory requirements concerning data storage and residency.\n\n**Examples**\n- \"my-app-data-bucket\"\n- \"user-uploads-2024\"\n- \"company-backups-eu-west-1\"\n\n**Important notes**\n- Bucket names are globally unique across all AWS accounts and cannot be reused or renamed once created.\n- Access control is managed through AWS IAM roles, policies, and bucket policies, which must be correctly configured.\n- The bucket’s region selection is critical for optimizing performance, minimizing costs, and ensuring legal compliance.\n- Bucket naming restrictions prohibit uppercase letters, underscores, and IP address-like formats to maintain DNS compliance.\n\n**Dependency chain**\n- Depends on the availability and stability of the AWS S3 service.\n- Requires valid AWS credentials with appropriate permissions to access or modify the bucket.\n- Typically used alongside object keys, region configurations,"},"fileKey":{"type":"string","description":"The unique identifier or path string that specifies the exact location of a file stored within an Amazon S3 bucket. This key acts as the primary reference for locating, accessing, managing, and manipulating a specific file within the storage system. It can be a simple filename or a hierarchical path that mimics folder structures to logically organize files within the bucket. The fileKey is case-sensitive and must be unique within the bucket to prevent conflicts or overwriting. It supports UTF-8 encoded characters and can include delimiters such as slashes (\"/\") to simulate directories, enabling structured file organization. Proper encoding and adherence to AWS naming conventions are essential to ensure seamless integration with S3 APIs and web requests. 
Additionally, the fileKey plays a critical role in access control, lifecycle policies, and event notifications within the S3 environment.\n\n**Field behavior**\n- Uniquely identifies a file within the S3 bucket namespace.\n- Serves as the primary reference in all file-related operations such as retrieval, update, deletion, and metadata management.\n- Case-sensitive and must be unique to avoid overwriting or conflicts.\n- Supports folder-like delimiters (e.g., slashes) to simulate directory structures for logical organization.\n- Used as a key parameter in all relevant S3 API operations, including GetObject, PutObject, DeleteObject, and CopyObject.\n- Influences access permissions and lifecycle policies when used with prefixes or specific key patterns.\n\n**Implementation guidance**\n- Use a UTF-8 encoded string that accurately reflects the file’s intended path or name.\n- Avoid unsupported or problematic characters; strictly follow S3 key naming conventions to prevent errors.\n- Incorporate logical folder structures using delimiters for better organization and maintainability.\n- Ensure URL encoding when the key is included in web requests or URLs to handle special characters properly.\n- Validate that the key length does not exceed S3’s maximum allowed size (up to 1024 bytes).\n- Consider the impact of key naming on access control policies, lifecycle rules, and event triggers.\n\n**Examples**\n- \"documents/report2024.pdf\"\n- \"images/profile/user123.png\"\n- \"backups/2023/12/backup.zip\"\n- \"videos/2024/events/conference.mp4\"\n- \"logs/2024/06/15/server.log\"\n\n**Important notes**\n- The fileKey is case-sensitive; for example, \"File.txt\" and \"file.txt\" are distinct keys.\n- Changing the fileKey"},"backupBucket":{"type":"string","description":"The name of the Amazon S3 bucket designated for securely storing backup data. This bucket serves as the primary repository for saving copies of critical information, ensuring data durability, integrity, and availability for recovery in the event of data loss, corruption, or disaster. It must be a valid, existing bucket within the AWS environment, configured with appropriate permissions and security settings to facilitate reliable and efficient backup operations. Proper configuration of this bucket—including enabling versioning to preserve historical backup states, applying encryption to protect data at rest, and setting lifecycle policies to manage storage costs and data retention—is essential to optimize backup management, maintain compliance with organizational and regulatory requirements, and control expenses. The bucket should ideally reside in the same AWS region as the backup source to minimize latency and data transfer costs, and may incorporate cross-region replication for enhanced disaster recovery capabilities. 
Additionally, the bucket should be monitored regularly for access patterns, storage usage, and security compliance to ensure ongoing protection and operational efficiency.\n\n**Field behavior**\n- Specifies the exact S3 bucket where backup data will be written and stored.\n- Must reference a valid, existing bucket within the AWS account and region.\n- Used exclusively during backup processes to save data snapshots or incremental backups.\n- Requires appropriate write permissions to allow backup services to upload data.\n- Typically remains static unless backup storage strategies or requirements change.\n- Supports integration with backup scheduling and monitoring systems.\n- Facilitates data recovery by maintaining backup copies in a secure, durable location.\n\n**Implementation guidance**\n- Verify that the bucket name adheres to AWS S3 naming conventions and is DNS-compliant.\n- Confirm the bucket exists and is accessible by the backup service or user.\n- Set up IAM roles and bucket policies to grant necessary write and read permissions securely.\n- Enable bucket versioning to maintain historical backup versions and support data recovery.\n- Implement lifecycle policies to manage storage costs by archiving or deleting old backups.\n- Apply encryption (e.g., AWS KMS) to protect backup data at rest and meet compliance requirements.\n- Consider enabling cross-region replication for disaster recovery scenarios.\n- Regularly audit bucket permissions and access logs to ensure security compliance.\n- Monitor storage usage and costs to optimize resource allocation and prevent unexpected charges.\n- Align bucket configuration with organizational policies for data retention, security, and compliance.\n\n**Examples**\n- \"my-app-backups\"\n- \"company-data-backup-2024\"\n- \"prod-environment-backup-bucket\"\n- \"s3-backups-region1"},"serverSideEncryptionType":{"type":"string","description":"serverSideEncryptionType specifies the type of server-side encryption applied to objects stored in the S3 bucket, determining how data at rest is securely protected by the storage service. This property defines the encryption algorithm or method used by the server to automatically encrypt data upon storage, ensuring confidentiality, integrity, and compliance with security standards. It accepts predefined values that correspond to supported encryption mechanisms, influencing how data is encrypted, accessed, and managed during storage and retrieval operations. 
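For illustration only, a minimal hedged sketch of an `s3` block that combines this property with the other S3 settings described above (the region, bucket, and key values are placeholders reused from the examples in this section, not taken from a real account):\n```json\n{\n  \"region\": \"us-east-1\",\n  \"bucket\": \"my-app-data-bucket\",\n  \"fileKey\": \"documents/report2024.pdf\",\n  \"backupBucket\": \"my-app-backups\",\n  \"serverSideEncryptionType\": \"aws:kms\"\n}\n```\nWhen \"aws:kms\" is selected, the relevant KMS key and its permissions must already be configured, as noted in the guidance below.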
By configuring this property, users can enforce encryption policies that safeguard sensitive information without requiring client-side encryption management, thereby simplifying security administration and enhancing data protection.\n\n**Field behavior**\n- Defines the encryption algorithm or method used by the server to encrypt data at rest.\n- Controls whether server-side encryption is enabled or disabled for stored objects.\n- Accepts specific predefined values corresponding to supported encryption types such as AES256 or aws:kms.\n- Influences data handling during storage and retrieval to maintain data confidentiality and integrity.\n- Applies encryption automatically without requiring client-side encryption management.\n- Determines compliance with organizational and regulatory encryption requirements.\n- Affects how encryption keys are managed, accessed, and audited.\n- Impacts access control and permissions related to encrypted objects.\n- May affect performance and cost depending on the encryption method chosen.\n\n**Implementation guidance**\n- Validate the input value against supported encryption types, primarily \"AES256\" and \"aws:kms\".\n- Ensure the selected encryption type is compatible with the S3 bucket’s configuration, policies, and permissions.\n- When using \"aws:kms\", specify the appropriate AWS KMS key ID if required, and verify key permissions.\n- Gracefully handle scenarios where encryption is not specified, disabled, or set to an empty value.\n- Provide clear and actionable error messages if an unsupported or invalid encryption type is supplied.\n- Consider the impact on performance and cost when enabling encryption, especially with KMS-managed keys.\n- Document encryption settings clearly for users to understand the security posture of stored data.\n- Implement fallback or default behaviors when encryption settings are omitted.\n- Ensure that encryption settings align with audit and compliance reporting requirements.\n- Coordinate with key management policies to maintain secure key lifecycle and rotation.\n- Test encryption settings thoroughly to confirm data is encrypted and accessible as expected.\n\n**Examples**\n- \"AES256\" — Server-side encryption using Amazon S3-managed encryption keys (SSE-S3).\n- \"aws:kms\" — Server-side encryption using AWS Key Management Service-managed keys (SSE-KMS"}}},"Wrapper":{"type":"object","description":"Configuration for Wrapper exports","properties":{"function":{"type":"string","description":"Specifies the exact name of the function to be invoked within the wrapper context, enabling dynamic and flexible execution of different functionalities based on the provided value. This property acts as a critical key identifier that directs the wrapper to call a specific function, facilitating modular, reusable, and maintainable code design. The function name must correspond to a valid, accessible, and callable function within the wrapper's scope, ensuring that the intended operation is performed accurately and efficiently. 
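As a non-authoritative illustration, the property might appear alongside the wrapper's configuration as follows (the function name and settings are reused from the examples later in this section, not from any specific wrapper):\n```json\n{\n  \"function\": \"processData\",\n  \"configuration\": { \"timeout\": 5000, \"maxRetries\": 3 }\n}\n```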
By allowing the function to be selected at runtime, this mechanism supports dynamic behavior, adaptable workflows, and context-sensitive execution paths, enhancing the overall flexibility and scalability of the system.\n\n**Field behavior**\n- Determines which specific function the wrapper will execute during its operation.\n- Accepts a string value representing the exact name of the target function.\n- Must match a valid and accessible function within the wrapper’s execution context.\n- Influences the wrapper’s behavior and output based on the selected function.\n- Supports dynamic assignment to enable flexible and context-dependent function calls.\n- Enables runtime selection of functionality, allowing for versatile and adaptive processing.\n\n**Implementation guidance**\n- Validate that the provided function name exists and is callable within the wrapper environment before invocation.\n- Ensure the function name is supplied as a string and adheres to the naming conventions used in the wrapper’s codebase.\n- Implement robust error handling to gracefully manage cases where the specified function does not exist, is inaccessible, or fails during execution.\n- Sanitize the input to prevent injection attacks, code injection, or other security vulnerabilities.\n- Allow dynamic updates to this property to support runtime changes in function execution without requiring system restarts.\n- Log function invocation attempts and outcomes for auditing and debugging purposes.\n- Consider implementing a whitelist or registry of allowed function names to enhance security and control.\n\n**Examples**\n- \"initialize\"\n- \"processData\"\n- \"renderOutput\"\n- \"cleanupResources\"\n- \"fetchUserDetails\"\n- \"validateInput\"\n- \"exportReport\"\n- \"authenticateUser\"\n\n**Important notes**\n- The function name is case-sensitive and must precisely match the function identifier in the underlying implementation.\n- Misspelled or incorrect function names will cause invocation errors or runtime failures.\n- This property specifies only the function name; any parameters or arguments for the function should be provided separately.\n- The wrapper context must be properly initialized and configured to recognize and execute the specified function.\n- Changes to this property can significantly alter the behavior of the wrapper, so modifications should be managed carefully"},"configuration":{"type":"object","description":"The `configuration` property defines a comprehensive and centralized set of parameters, settings, and operational directives that govern the behavior, functionality, and characteristics of the wrapper component. It acts as the primary container for all customizable options that influence how the wrapper initializes, runs, and adapts to various deployment environments or user requirements. This includes environment-specific configurations, feature toggles, resource limits, integration endpoints, security credentials, logging preferences, and other critical controls. 
By encapsulating these diverse settings, the configuration property enables fine-grained control over the wrapper’s runtime behavior, supports dynamic adaptability, and facilitates consistent management across different scenarios.\n\n**Field behavior**\n- Encapsulates all relevant configurable options affecting the wrapper’s operation and lifecycle.\n- Supports complex nested structures or key-value pairs to represent hierarchical and interrelated settings.\n- Can be mandatory or optional depending on the wrapper’s design and operational context.\n- Changes to configuration typically influence initialization, runtime behavior, and potentially require component restart or reinitialization.\n- May support dynamic updates if the wrapper is designed for live reconfiguration without downtime.\n- Acts as a single source of truth for operational parameters, ensuring consistency and predictability.\n\n**Implementation guidance**\n- Organize configuration parameters logically, grouping related settings to improve readability and maintainability.\n- Implement robust validation mechanisms to enforce correct data types, value ranges, and adherence to constraints.\n- Provide sensible default values for optional parameters to enhance usability and prevent misconfiguration.\n- Design the configuration schema to be extensible and backward-compatible, allowing seamless future enhancements.\n- Secure sensitive information within the configuration using encryption, secure storage, or exclusion from logs and error messages.\n- Thoroughly document each configuration option, including its purpose, valid values, default behavior, and impact on the wrapper.\n- Consider environment-specific overrides or profiles to facilitate deployment flexibility.\n- Ensure configuration changes are atomic and consistent to avoid partial or invalid states.\n\n**Examples**\n- `{ \"timeout\": 5000, \"enableLogging\": true, \"maxRetries\": 3 }`\n- `{ \"environment\": \"production\", \"apiEndpoint\": \"https://api.example.com\", \"features\": { \"beta\": false } }`\n- Nested objects specifying database connection details, authentication credentials, UI themes, third-party service integrations, or resource limits.\n- Feature flags enabling or disabling experimental capabilities dynamically.\n- Security parameters such as OAuth tokens, encryption keys, or access control lists.\n\n**Important notes**\n- Comprehensive and clear documentation of all"},"lookups":{"type":"object","properties":{"type":{"type":"string","description":"type: >\n  Specifies the category or classification of the lookup within the wrapper context.\n  This property defines the nature or kind of lookup being performed or referenced.\n  **Field behavior**\n  - Determines the specific type of lookup operation or data classification.\n  - Influences how the lookup data is processed or interpreted.\n  - May restrict or enable certain values or options based on the type selected.\n  **Implementation guidance**\n  - Use clear and consistent naming conventions for different types.\n  - Validate the value against a predefined set of allowed types to ensure data integrity.\n  - Ensure that the type aligns with the corresponding lookup logic or data source.\n  **Examples**\n  - \"userRole\"\n  - \"productCategory\"\n  - \"statusCode\"\n  - \"regionCode\"\n  **Important notes**\n  - The type value is critical for correctly resolving and handling lookup data.\n  - Changing the type may affect downstream processing or data retrieval.\n  - Ensure compatibility with other related 
fields or components that depend on this type.\n  **Dependency chain**\n  - Depends on the wrapper context to provide scope.\n  - Influences the selection and retrieval of lookup values.\n  - May be linked to validation schemas or business logic modules.\n  **Technical details**\n  - Typically represented as a string.\n  - Should conform to a controlled vocabulary or enumeration where applicable.\n  - Case sensitivity may apply depending on implementation."},"_lookupCacheId":{"type":"string","format":"objectId","description":"_lookupCacheId: Identifier used to reference a specific cache entry for lookup operations within the wrapper's lookup mechanism.\n**Field behavior**\n- Serves as a unique identifier for cached lookup data.\n- Used to retrieve or update cached lookup results efficiently.\n- Helps in minimizing redundant lookup operations by reusing cached data.\n- Typically remains constant for the lifespan of the cached data entry.\n**Implementation guidance**\n- Should be generated to ensure uniqueness within the scope of the wrapper's lookups.\n- Must be consistent and stable to correctly reference the intended cache entry.\n- Should be invalidated or updated when the underlying lookup data changes.\n- Can be a string or numeric identifier depending on the caching system design.\n**Examples**\n- \"userRolesCache_v1\"\n- \"productLookupCache_2024_06\"\n- \"regionCodesCache_42\"\n**Important notes**\n- Proper management of _lookupCacheId is critical to avoid stale or incorrect lookup data.\n- Changing the _lookupCacheId without updating the cache content can lead to lookup failures.\n- Should be used in conjunction with cache expiration or invalidation strategies.\n**Dependency chain**\n- Depends on the wrapper.lookups system to manage cached lookup data.\n- May interact with cache storage or memory management components.\n- Influences lookup operation performance and data consistency.\n**Technical details**\n- Typically implemented as a string identifier.\n- May be used as a key in key-value cache stores.\n- Should be designed to avoid collisions with other cache identifiers.\n- May include versioning or timestamp information to manage cache lifecycle."},"extract":{"type":"string","description":"Extract: Specifies the extraction criteria or pattern used to retrieve specific data from a given input within the lookup process.\n**Field behavior**\n- Defines the rules or patterns to identify and extract relevant information from input data.\n- Can be a string, regular expression, or structured query depending on the implementation.\n- Used during the lookup operation to isolate the desired subset of data.\n- May support multiple extraction patterns or conditional extraction logic.\n**Implementation guidance**\n- Ensure the extraction pattern is precise to avoid incorrect or partial data retrieval.\n- Validate the extraction criteria against sample input data to confirm accuracy.\n- Support common pattern formats such as regex for flexible and powerful extraction.\n- Provide clear error handling if the extraction pattern fails or returns no results.\n- Allow for optional extraction parameters to customize behavior (e.g., case sensitivity).\n**Examples**\n- A regex pattern like `\"\\d{4}-\\d{2}-\\d{2}\"` to extract dates in YYYY-MM-DD format.\n- A JSONPath expression such as `$.store.book[*].author` to extract authors from a JSON object.\n- A simple substring like `\"ERROR:\"` to extract error messages from logs.\n- XPath query to extract nodes from XML data.\n**Important notes**\n- The 
extraction logic directly impacts the accuracy and relevance of the lookup results.\n- Complex extraction patterns may affect performance; optimize where possible.\n- Extraction should handle edge cases such as missing fields or unexpected data formats gracefully.\n- Document supported extraction pattern types and syntax clearly for users.\n**Dependency chain**\n- Depends on the input data format and structure to define appropriate extraction criteria.\n- Works in conjunction with the lookup mechanism that applies the extraction.\n- May interact with validation or transformation steps post-extraction.\n**Technical details**\n- Typically implemented using pattern matching libraries or query language parsers.\n- May require escaping or encoding special characters within extraction patterns.\n- Should support configurable options like multiline matching or case sensitivity.\n- Extraction results are usually returned as strings, arrays, or structured objects depending on context."},"map":{"type":"object","description":"map: >\n  A mapping object that defines key-value pairs used for lookup operations within the wrapper context.\n  **Field behavior**\n  - Serves as a dictionary or associative array for translating or mapping input keys to corresponding output values.\n  - Used to customize or override default lookup values dynamically.\n  - Supports string keys and values, but may also support other data types depending on implementation.\n  **Implementation guidance**\n  - Ensure keys are unique within the map to avoid conflicts.\n  - Validate that values conform to expected formats or types required by the lookup logic.\n  - Consider immutability or controlled updates to prevent unintended side effects during runtime.\n  - Provide clear error handling for missing or invalid keys during lookup operations.\n  **Examples**\n  - {\"US\": \"United States\", \"CA\": \"Canada\", \"MX\": \"Mexico\"}\n  - {\"error404\": \"Not Found\", \"error500\": \"Internal Server Error\"}\n  - {\"env\": \"production\", \"version\": \"1.2.3\"}\n  **Important notes**\n  - The map is context-specific and should be populated according to the needs of the wrapper's lookup functionality.\n  - Large maps may impact performance; optimize size and access patterns accordingly.\n  - Changes to the map may require reinitialization or refresh of dependent components.\n  **Dependency chain**\n  - Depends on the wrapper.lookups context for proper integration.\n  - May be referenced by other properties or methods performing lookup operations.\n  **Technical details**\n  - Typically implemented as a JSON object or dictionary data structure.\n  - Keys and values are usually strings but can be extended to other serializable types.\n  - Should support efficient retrieval, ideally O(1) time complexity for lookups."},"default":{"type":["object","null"],"description":"The default property specifies the fallback or initial value to be used within the wrapper's lookups configuration when no other specific value is provided or applicable. 
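For example, a hedged sketch of a lookup entry that pairs a map with a fallback (values reused from the illustrative examples elsewhere in this lookups section):\n```json\n{\n  \"map\": { \"US\": \"United States\", \"CA\": \"Canada\", \"MX\": \"Mexico\" },\n  \"default\": \"unknown\"\n}\n```\nHere a key not present in the map resolves to \"unknown\" instead of producing an undefined result.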
This ensures that the system has a predefined baseline value to operate with, preventing errors or undefined behavior.\n\n**Field behavior**\n- Acts as the fallback value in the lookup process.\n- Used when no other matching lookup value is found.\n- Provides a baseline or initial value for the wrapper configuration.\n- Ensures consistent behavior by avoiding null or undefined states.\n\n**Implementation guidance**\n- Define a sensible and valid default value that aligns with the expected data type and usage context.\n- Ensure the default value is compatible with other dependent fields or processes.\n- Update the default value cautiously, as it impacts the fallback behavior system-wide.\n- Validate the default value during configuration loading to prevent runtime errors.\n\n**Examples**\n- A string value such as \"N/A\" or \"unknown\" for textual lookups.\n- A numeric value like 0 or -1 for numerical lookups.\n- A boolean value such as false when a true/false default is needed.\n- An object or dictionary representing a default configuration set.\n\n**Important notes**\n- The default value should be meaningful and appropriate to avoid misleading results.\n- Overriding the default value may affect downstream logic relying on fallback behavior.\n- Absence of a default value might lead to errors or unexpected behavior in the lookup process.\n- Consider localization or context-specific defaults if applicable.\n\n**Dependency chain**\n- Used by the wrapper.lookups mechanism during value resolution.\n- May influence or be influenced by other lookup-related properties or configurations.\n- Interacts with error handling or fallback logic in the system.\n\n**Technical details**\n- Data type should match the expected type of lookup values (string, number, boolean, object, etc.).\n- Stored and accessed as part of the wrapper's lookup configuration object.\n- Should be immutable during runtime unless explicitly reconfigured.\n- May be serialized/deserialized as part of configuration files or API payloads."},"allowFailures":{"type":"boolean","description":"allowFailures: Indicates whether the system should permit failures during the lookup process without aborting the entire operation.\n**Field behavior**\n- When set to true, individual lookup failures are tolerated, allowing the process to continue.\n- When set to false, any failure in the lookup process causes the entire operation to fail.\n- Helps in scenarios where partial results are acceptable or expected.\n**Implementation guidance**\n- Use this flag to control error handling behavior in lookup operations.\n- Ensure that downstream processes can handle partial or incomplete data if allowFailures is true.\n- Log or track failures when allowFailures is enabled to aid in debugging and monitoring.\n**Examples**\n- allowFailures: true — The system continues processing even if some lookups fail.\n- allowFailures: false — The system stops and reports an error immediately upon a lookup failure.\n**Important notes**\n- Enabling allowFailures may result in incomplete or partial data sets.\n- Use with caution in critical systems where data integrity is paramount.\n- Consider combining with retry mechanisms or fallback strategies.\n**Dependency chain**\n- Depends on the lookup operation within wrapper.lookups.\n- May affect error handling and result aggregation components.\n**Technical details**\n- Boolean value: true or false.\n- Default behavior should be defined explicitly to avoid ambiguity.\n- Should be checked before executing lookup calls to determine 
error handling flow."},"_id":{"type":"object","description":"Unique identifier for the lookup entry within the wrapper context.\n**Field behavior**\n- Serves as the primary key to uniquely identify each lookup entry.\n- Immutable once assigned to ensure consistent referencing.\n- Used to retrieve, update, or delete specific lookup entries.\n**Implementation guidance**\n- Should be generated using a globally unique identifier (e.g., UUID or ObjectId).\n- Must be indexed in the database for efficient querying.\n- Should be validated to ensure uniqueness within the wrapper.lookups collection.\n**Examples**\n- \"507f1f77bcf86cd799439011\"\n- \"a1b2c3d4e5f6789012345678\"\n**Important notes**\n- This field is mandatory for every lookup entry.\n- Changing the _id after creation can lead to data inconsistency.\n- Should not contain sensitive or personally identifiable information.\n**Dependency chain**\n- Used by other fields or services that reference lookup entries.\n- May be linked to foreign keys or references in related collections or tables.\n**Technical details**\n- Typically stored as a string or ObjectId type depending on the database.\n- Must conform to the format and length constraints of the chosen identifier scheme.\n- Should be generated server-side to prevent collisions."},"cLocked":{"type":"object","description":"cLocked indicates whether the item is currently locked, preventing modifications or deletions.\n**Field behavior**\n- Represents the lock status of an item within the system.\n- When set to true, the item is considered locked and cannot be edited or deleted.\n- When set to false, the item is unlocked and available for modifications.\n- Typically used to enforce data integrity and prevent concurrent conflicting changes.\n**Implementation guidance**\n- Should be a boolean value: true or false.\n- Ensure that any operation attempting to modify or delete the item checks the cLocked status first.\n- Update the cLocked status appropriately when locking or unlocking the item.\n- Consider integrating with user permissions to control who can change the lock status.\n**Examples**\n- cLocked: true (The item is locked and cannot be changed.)\n- cLocked: false (The item is unlocked and editable.)\n**Important notes**\n- Locking an item does not necessarily mean it is read-only; it depends on the system's enforcement.\n- The lock status should be clearly communicated to users to avoid confusion.\n- Changes to cLocked should be logged for audit purposes.\n**Dependency chain**\n- May depend on user roles or permissions to set or unset the lock.\n- Other fields or operations may check cLocked before proceeding.\n**Technical details**\n- Data type: Boolean.\n- Default value is typically false unless specified otherwise.\n- Stored as part of the lookup or wrapper object in the data model.\n- Should be efficiently queryable to enforce lock checks during operations."}},"description":"A collection of key-value pairs that serve as a centralized reference dictionary to map or translate specific identifiers, codes, or keys into meaningful, human-readable, or contextually relevant values within the wrapper object. This property facilitates data normalization, standardization, and clarity across various components of the application by providing descriptive labels, metadata, or explanatory information associated with each key. 
It enhances data interpretation, validation, and presentation by enabling consistent and seamless conversion of coded data into understandable formats, supporting both simple and complex hierarchical mappings as needed.\n\n**Field behavior**\n- Functions as a lookup dictionary to resolve codes or keys into descriptive, user-friendly values.\n- Supports consistent data representation and interpretation throughout the application.\n- Commonly accessed during data processing, validation, or UI rendering to enhance readability.\n- May support hierarchical or nested mappings for complex relationships.\n- Typically remains static or changes infrequently but should reflect the latest valid mappings.\n\n**Implementation guidance**\n- Structure as an object, map, or dictionary with unique, well-defined keys and corresponding descriptive values.\n- Ensure keys are consistent, unambiguous, and conform to expected formats or standards.\n- Values should be concise, clear, and informative to facilitate easy understanding by end-users or systems.\n- Support nested or multi-level lookups if the domain requires hierarchical mapping.\n- Validate lookup data integrity regularly to prevent missing, outdated, or incorrect mappings.\n- Consider immutability or concurrency controls if the lookup data is accessed or modified concurrently.\n- Optimize for efficient retrieval, aiming for constant-time (O(1)) lookup performance.\n\n**Examples**\n- Mapping ISO country codes to full country names, e.g., {\"US\": \"United States\", \"FR\": \"France\"}.\n- Translating HTTP status codes to descriptive messages, e.g., {\"200\": \"OK\", \"404\": \"Not Found\"}.\n- Converting product or SKU identifiers to product names within an inventory management system.\n- Mapping error codes to user-friendly error descriptions.\n- Associating role IDs with role names in an access control system.\n\n**Important notes**\n- Keep the lookup collection current to accurately reflect any updates in codes or their meanings.\n- Avoid including sensitive, confidential, or personally identifiable information within lookup values.\n- Monitor and manage the size of the lookup collection to prevent performance degradation.\n- Ensure thread-safety or appropriate synchronization mechanisms if the lookups are mutable and accessed concurrently.\n- Changes to lookup data can have widespread effects on data interpretation and user interface behavior."}}},"Salesforce":{"type":"object","description":"Configuration for Salesforce imports containing operation type, API selection, and object-specific settings.\n\n**IMPORTANT:** This schema defines ONLY the Salesforce-specific configuration properties. Properties like `ignoreExisting`, `ignoreMissing`, `name`, `description` are NOT part of this schema - they belong at a different level.\n\nWhen asked to generate this configuration, return an object with these properties:\n- `operation` (REQUIRED): The Salesforce operation type (insert, update, upsert, etc.)\n- `api` (REQUIRED): The Salesforce API to use (default: \"soap\")\n- `sObjectType`: The Salesforce object type (e.g., \"Account\", \"Contact\", \"Vendor__c\")\n- `idLookup`: Configuration for looking up existing records\n- `upsert`: Configuration for upsert operations (when operation is \"upsert\")\n- And other operation-specific properties as needed","properties":{"lookups":{"type":["string","null"],"description":"A collection of lookup tables or reference data sets that provide standardized, predefined values or options within the application context. 
These lookup tables enable consistent data entry, validation, and interpretation by defining controlled vocabularies or permissible values for various fields or parameters across the system. They serve as authoritative sources for reference data, ensuring uniformity and reducing errors in data handling and user interactions. Lookup tables typically encompass static or infrequently changing data that supports consistent business logic and user interface behavior across multiple modules or components. By centralizing these reference datasets, the system promotes reusability, simplifies maintenance, and enhances data integrity throughout the application lifecycle.\n\n**Field behavior**\n- Contains multiple named lookup tables, each representing a distinct category or domain of reference data.\n- Used to populate user interface elements such as dropdown menus, autocomplete fields, and selection lists.\n- Supports validation logic by restricting input to predefined permissible values.\n- Typically consists of static or infrequently changing data that underpins consistent data interpretation.\n- May be referenced by multiple components or modules within the application to maintain data consistency.\n- Enables dynamic adaptation of UI and business rules by centralizing reference data management.\n- Facilitates localization by supporting language-specific labels or descriptions where applicable.\n- Often versioned or managed to track changes and ensure backward compatibility.\n\n**Implementation guidance**\n- Structure as a dictionary or map where keys correspond to lookup table names and values are arrays or lists of unique, well-defined entries.\n- Ensure each lookup entry is unique within its table and includes necessary metadata such as codes, descriptions, or display labels.\n- Design for easy updates or extensions to lookup tables without compromising existing data integrity or system stability.\n- Implement caching strategies to optimize performance, especially for client-side consumption.\n- Consider supporting localization and internationalization by including language-specific labels or descriptions where applicable.\n- Incorporate versioning or change management mechanisms to handle updates without disrupting dependent systems.\n- Validate data integrity rigorously during ingestion or updates to prevent duplicates, inconsistencies, or invalid entries.\n- Separate lookup data from business logic to maintain clarity and ease of maintenance.\n- Provide clear documentation for each lookup table to facilitate understanding and correct usage by developers and users.\n- Ensure lookup data is accessible through standardized APIs or interfaces to promote interoperability.\n\n**Examples**\n- A \"countries\" lookup containing standardized country codes (e.g., ISO 3166-1 alpha-2) and their corresponding country names.\n- A \"statusCodes\" lookup defining permissible status values such as \""},"operation":{"type":"string","enum":["insert","update","upsert","upsertpicklistvalues","delete","addupdate"],"description":"Specifies the type of Salesforce operation to perform on records. 
This determines how data is created, modified, or removed in Salesforce.\n\n**Field behavior**\n- Defines the specific Salesforce action to execute (insert, update, upsert, etc.)\n- Dictates required fields and validation rules for the operation\n- Determines API endpoint and request structure\n- Automatically converted to lowercase before processing\n- Influences response format and error handling\n\n**Operation types**\n\n****insert****\n- **Purpose:** Create new records in Salesforce\n- **Behavior:** Creates new records\n- **Use when:** Creating brand new records\n- **With ignoreExisting: true:** Checks for existing records and skips them (requires idLookup.whereClause)\n- **Without ignoreExisting:** Creates all records without checking for duplicates\n- **Example:** \"Insert new leads from marketing campaign\"\n- **Example with ignoreExisting:** \"Create vendors while ignoring existing vendors\" → operation: \"insert\" + ignoreExisting: true\n\n****update****\n- **Purpose:** Modify existing records in Salesforce\n- **Behavior:** Requires record ID; fails if record doesn't exist\n- **Use when:** Updating known existing records\n- **Example:** \"Update customer status based on order completion\"\n\n****upsert****\n- **Purpose:** Create or update records based on external ID\n- **Behavior:** Updates if external ID matches, creates if not found\n- **Use when:** Syncing data from external systems with external ID field\n- **Example:** \"Upsert customers by External_ID__c field\"\n- **REQUIRES:**\n  - `idLookup.extract` field (maps incoming field to External ID)\n  - `upsert.externalIdField` field (specifies Salesforce External ID field)\n  - Both fields MUST be set for upsert to work\n\n****upsertpicklistvalues****\n- **Purpose:** Create or update picklist values dynamically\n- **Behavior:** Manages picklist value creation/updates\n- **Use when:** Maintaining picklist values from external data\n- **Example:** \"Sync product categories to Salesforce picklist\"\n\n****delete****\n- **Purpose:** Remove records from Salesforce\n- **Behavior:** Requires record ID; records moved to recycle bin\n- **Use when:** Removing obsolete or invalid records\n- **Example:** \"Delete expired opportunities\"\n\n****addupdate****\n- **Purpose:** Create new or update existing records (Celigo-specific)\n- **Behavior:** Similar to upsert but uses different matching logic\n- **Use when:** Upserting without requiring external ID field\n- **Example:** \"Sync contacts, create new or update existing by email\"\n\n**Implementation guidance**\n- Choose operation based on whether records are new, existing, or unknown\n- For **insert**: Ensure records are truly new to avoid errors\n- For **insert with ignoreExisting: true**: MUST set `idLookup.whereClause` to check for existing records\n- For **update/delete**: MUST set `idLookup.whereClause` to identify records\n- For **upsert**: MUST also include the `upsert` object with `externalIdField` property\n- For **upsert**: MUST also set `idLookup.extract` to map incoming field\n- For **addupdate**: MUST set `idLookup.whereClause` to define matching criteria\n- Use **upsertpicklistvalues** only for picklist management scenarios\n\n**Common patterns**\n- **\"Create X while ignoring existing X\"** → operation: \"insert\" + api: \"soap\" + idLookup.whereClause\n- **\"Create or update X\"** → operation: \"upsert\" + api: \"soap\" + upsert.externalIdField + idLookup.extract\n- **\"Update X\"** → operation: \"update\" + api: \"soap\" + idLookup.whereClause\n- **\"Sync X\"** → 
operation: \"addupdate\" + api: \"soap\" + idLookup.whereClause\n- **ALWAYS include** api field (default to \"soap\" if not specified in prompt)\n\n**Examples**\n\n**Insert new records**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Lead\"\n}\n```\nPrompt: \"Create new leads in Salesforce\"\n\n**Insert with ignore existing**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Vendor__c\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{VendorName}}'\"\n  }\n}\n```\nPrompt: \"Create vendor records while ignoring existing vendors\"\n\n**NOTE:** For \"ignore existing\" prompts, also remember that `ignoreExisting: true` belongs at a different level (not part of this salesforce configuration).\n\n**Update existing records**\n```json\n{\n  \"operation\": \"update\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Account\",\n  \"idLookup\": {\n    \"whereClause\": \"Id = '{{AccountId}}'\"\n  }\n}\n```\nPrompt: \"Update existing accounts in Salesforce\"\n\n**Upsert by external id**\n```json\n{\n  \"operation\": \"upsert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Contact\",\n  \"idLookup\": {\n    \"extract\": \"ExternalId\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\nPrompt: \"Create or update contacts by External_ID__c\"\n\n**Addupdate (create or update)**\n```json\n{\n  \"operation\": \"addupdate\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Customer__c\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{CustomerName}}'\"\n  }\n}\n```\nPrompt: \"Sync customers, create new or update existing\"\n\n**Important notes**\n- Operation value is case-insensitive (automatically converted to lowercase)\n- Invalid operation values will cause API validation errors\n- Each operation has different requirements for record identification\n- **CRITICAL for upsert**: MUST include the `upsert` object with `externalIdField` property\n- **CRITICAL for upsert**: MUST set `idLookup.extract` to map incoming field to External ID\n- **upsert** requires external ID field to be configured in Salesforce\n- **addupdate** is Celigo-specific and may not work with all Salesforce APIs\n- Operations must align with user's Salesforce permissions\n- Some operations support bulk processing more efficiently than others"},"api":{"type":"string","enum":["soap","rest","metadata","compositerecord"],"description":"**REQUIRED FIELD:** Specifies which Salesforce API to use for the import operation. Different APIs have different capabilities, performance characteristics, and use cases.\n\n**CRITICAL:** This field MUST ALWAYS be set. 
If no specific API is mentioned in the prompt, **DEFAULT to \"soap\"**.\n\n**Field behavior**\n- **REQUIRED** for all Salesforce imports\n- Determines the protocol and endpoint used for Salesforce communication\n- Influences request format, batch sizes, and response handling\n- Automatically converted to lowercase before processing\n- Affects available features and operation types\n- Impacts performance and error handling behavior\n\n**Api types**\n\n****soap** (Default - Most Common)**\n- **Purpose:** SOAP API with additional features like AllOrNone transaction control\n- **Best for:** Transactional imports requiring rollback capabilities\n- **Features:** AllOrNone header, enhanced transaction control, XML-based\n- **Limits:** Similar to REST but with more overhead\n- **Use when:** Need transaction rollback, legacy integrations\n- **Example:** \"Import invoices with all-or-none transaction guarantee\"\n\n****rest****\n- **Purpose:** Standard REST API for single-record and small-batch operations\n- **Best for:** Real-time imports, small datasets (< 200 records/call)\n- **Features:** Full CRUD operations, flexible querying, rich error messages\n- **Limits:** 2,000 records per API call\n- **Use when:** Standard import operations, real-time data sync\n- **Example:** \"Import leads from web form submissions\"\n\n****metadata****\n- **Purpose:** Metadata API for importing configuration and setup data\n- **Best for:** Deploying Salesforce configuration, not data records\n- **Features:** Deploy custom objects, fields, workflows, etc.\n- **Use when:** Deploying Salesforce configuration between orgs\n- **Example:** \"Deploy custom object definitions from non-production to production\"\n- **Note:** NOT for data records - only for metadata\n\n****compositerecord****\n- **Purpose:** Composite Graph API for related record trees\n- **Best for:** Creating multiple related records in one request\n- **Features:** Create parent-child relationships atomically\n- **Use when:** Importing hierarchical data (e.g., Account + Contacts + Opportunities)\n- **Example:** \"Import accounts with their related contacts in one operation\"\n\n**Implementation guidance**\n- **MUST ALWAYS SET THIS FIELD** - Never leave it undefined\n- **Default to \"soap\"** for most import operations\n- If prompt doesn't specify API type, use **\"soap\"**\n- Use **\"soap\"** when AllOrNone transaction control is required\n- Use **\"rest\"** for standard real-time imports and small batches when explicitly requested\n- Use **\"metadata\"** only for configuration deployment, never for data\n- Use **\"compositerecord\"** for complex parent-child relationship imports\n- Consider API limits and performance characteristics for your data volume\n- Ensure the Salesforce connection supports the chosen API type\n\n**Default choice**\nWhen in doubt or when no API is mentioned in the prompt → **USE \"soap\"**\n\n**Examples**\n\n**Rest api**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"rest\",\n  \"sObjectType\": \"Lead\"\n}\n```\nPrompt: \"Import leads into Salesforce\"\n\n**Soap api with transaction control**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Invoice__c\",\n  \"soap\": {\n    \"headers\": {\n      \"allOrNone\": true\n    }\n  }\n}\n```\nPrompt: \"Import invoices with all-or-none guarantee\"\n\n**Composite Record api**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"compositerecord\",\n  \"sObjectType\": \"Account\"\n}\n```\nPrompt: \"Import accounts with related contacts and opportunities\"\n\n**Important notes**\n- API value is 
case-insensitive (automatically converted to lowercase)\n- **SOAP API is the default; REST is also commonly used**\n- SOAP API has slightly more overhead but provides transaction guarantees\n- Metadata API is for configuration only, not data records\n- Composite Record API requires careful relationship mapping\n- Each API has different rate limits and performance characteristics\n- Ensure your Salesforce connection and permissions support the chosen API"},"soap":{"type":"object","properties":{"headers":{"type":"object","properties":{"allOrNone":{"type":"boolean","description":"allOrNone specifies whether the Salesforce API operation should be executed atomically, ensuring that either all parts of the operation succeed or none do. When set to true, if any record in a batch operation fails, the entire transaction is rolled back and no changes are committed, preserving data integrity across the dataset. When set to false, the operation allows partial success, meaning some records may be processed and committed even if others fail, which can improve performance but requires careful error handling to manage partial failures. This property is primarily applicable to data manipulation operations such as create, update, delete, and upsert, and is critical for maintaining consistent and reliable data states in multi-record transactions.\n\n**Field behavior**\n- When true, enforces atomicity: all records in the batch must succeed or the entire operation is rolled back with no changes applied.\n- When false, permits partial success: successfully processed records are committed even if some records fail.\n- Applies mainly to batch DML operations including create, update, delete, and upsert.\n- Helps maintain data consistency by preventing partial updates when strict transactional integrity is required.\n- Influences how errors are reported and handled in the API response.\n- Affects transaction commit behavior and rollback mechanisms within the Salesforce platform.\n\n**Implementation guidance**\n- Set to true to guarantee strict data consistency and avoid partial data changes that could lead to inconsistent states.\n- Set to false to allow higher throughput and partial processing when some failures are acceptable or expected.\n- Default value is false if the property is omitted, allowing partial success by default.\n- When false, implement robust error handling to process partial success responses and handle individual record errors appropriately.\n- Verify that the target Salesforce API operation supports the allOrNone header before use.\n- Consider the impact on transaction duration and system resources when enabling atomic operations.\n- Use in scenarios where data integrity is paramount, such as financial or compliance-related data updates.\n\n**Examples**\n- allOrNone = true: Attempting to update 10 records where one record fails validation results in no records being updated, preserving atomicity.\n- allOrNone = false: Attempting to update 10 records where one record fails validation results in 9 records updated successfully and one failure reported.\n- allOrNone = true: Deleting multiple records in a batch will either delete all specified records or none if any deletion fails.\n- allOrNone = false: Creating multiple records where some violate unique constraints results in successful creation of valid records and"}},"description":"Headers to be included in the SOAP request sent to Salesforce, primarily used for authentication, session management, and transmitting additional metadata required by the 
Salesforce API. These headers form part of the SOAP envelope's header section and are essential for establishing and maintaining a valid session, specifying client options, and enabling various Salesforce API features. They can include standard headers such as SessionHeader for session identification, CallOptions for client-specific settings, and LoginScopeHeader for scoping login requests, as well as custom headers tailored to specific integration needs. Proper configuration of these headers ensures secure and successful communication with Salesforce services, enabling precise control over API interactions and session lifecycle.\n\n**Field behavior**\n- Defines the set of SOAP headers included in every request to the Salesforce API.\n- Facilitates passing of authentication credentials like session IDs or OAuth tokens.\n- Supports inclusion of metadata that controls API behavior or scopes requests.\n- Allows addition of custom headers required by specialized Salesforce integrations.\n- Headers persist across multiple API calls within the same session when set.\n- Modifies request context by influencing how Salesforce processes and authorizes API calls.\n- Enables fine-grained control over API call execution, such as setting query options or client identifiers.\n\n**Implementation guidance**\n- Construct headers as key-value pairs conforming to Salesforce’s SOAP header XML schema.\n- Include mandatory authentication headers such as SessionHeader with valid session IDs.\n- Validate all header values to prevent injection attacks or malformed requests.\n- Use headers to specify client information, API versioning, or organizational context as needed.\n- Ensure header names and structures strictly follow Salesforce SOAP API documentation.\n- Secure sensitive header data to prevent unauthorized access or leakage.\n- Update headers appropriately when session tokens expire or when switching contexts.\n- Test header configurations thoroughly to confirm compatibility with Salesforce API versions.\n- Consider using namespaces and proper XML serialization to maintain SOAP compliance.\n- Handle errors gracefully when headers are missing or invalid to improve debugging.\n\n**Examples**\n- `{ \"SessionHeader\": { \"sessionId\": \"00Dxx0000001gEREAY\" } }` — authenticates the session.\n- `{ \"CallOptions\": { \"client\": \"MyAppClient\" } }` — specifies client application details.\n- `{ \"LoginScopeHeader\": { \"organizationId\": \"00Dxx0000001gEREAY\" } }` — scopes login to a specific organization.\n- `{ \"CustomHeader\": { \"customField\": \"customValue\" } }` — example of a"},"batchSize":{"type":"number","description":"The number of records to be processed in each batch during Salesforce SOAP API operations. This property controls the size of data chunks sent or retrieved in a single API call, optimizing performance and resource utilization by balancing throughput and system resource consumption. Adjusting the batch size directly impacts the number of API calls made, the processing speed, memory usage, and network load, enabling fine-tuning of data operations to suit specific environment constraints and performance goals. 
Proper configuration of batchSize helps prevent API throttling, reduces the risk of timeouts, and ensures efficient handling of large datasets by segmenting data into manageable portions.\n\n**Field behavior**\n- Determines the count of records included in each batch request to the Salesforce SOAP API.\n- Directly influences the total number of API calls required when handling large datasets.\n- Affects processing throughput, latency, and memory usage during data operations.\n- Can be dynamically adjusted to optimize performance based on system capabilities and network conditions.\n- Impacts error handling and retry mechanisms by controlling batch granularity.\n- Influences network load and the likelihood of encountering API limits or timeouts.\n\n**Implementation guidance**\n- Choose a batch size that adheres to Salesforce API limits and recommended best practices.\n- Balance between larger batch sizes (which reduce API calls but increase memory and processing load) and smaller batch sizes (which increase API calls but reduce resource consumption).\n- Ensure the batchSize value is a positive integer within the allowed range for the specific Salesforce operation.\n- Continuously monitor API response times, error rates, and resource utilization to fine-tune the batch size for optimal performance.\n- Default to Salesforce-recommended batch sizes when uncertain or when starting configuration.\n- Consider network stability and latency when selecting batch size to minimize timeouts and failures.\n- Validate batch size compatibility with other integration components and middleware.\n- Test batch size settings in a staging environment before deploying to production to avoid unexpected failures.\n\n**Examples**\n- `batchSize: 200` — Processes 200 records per batch, a common default for many Salesforce operations.\n- `batchSize: 500` — Processes 500 records per batch, often the maximum allowed for certain Salesforce SOAP API calls.\n- `batchSize: 50` — Processes smaller batches suitable for environments with limited memory or network bandwidth.\n- `batchSize: 100` — A moderate batch size balancing throughput and resource usage in typical scenarios.\n\n**Important notes**\n- Salesforce SOAP API enforces maximum batch size limits (commonly 200"}},"description":"Specifies the comprehensive configuration settings required for integrating with the Salesforce SOAP API. This property includes all essential parameters to establish, authenticate, and manage SOAP-based communication with Salesforce services, enabling a wide range of operations such as creating, updating, deleting, querying, and executing actions on Salesforce objects. It encompasses detailed configurations for endpoint URLs, authentication credentials (such as session IDs or OAuth tokens), SOAP headers, session management details, timeout settings, and error handling mechanisms to ensure reliable, secure, and efficient interactions. 
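As a minimal hedged sketch, the block might sit inside a Salesforce import configuration as follows (the object name, header, and batch size are illustrative values reused from the examples above, not a prescribed setup):\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Account\",\n  \"soap\": {\n    \"headers\": { \"allOrNone\": true },\n    \"batchSize\": 200\n  }\n}\n```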
The configuration must strictly adhere to Salesforce's WSDL definitions and security standards, supporting robust SOAP API usage within the application environment.\n\n**Field behavior**\n- Defines all connection, authentication, and operational parameters necessary for Salesforce SOAP API interactions.\n- Facilitates the construction and transmission of properly structured SOAP requests and the parsing of SOAP responses.\n- Manages the session lifecycle, including token acquisition, refresh, and timeout handling to maintain persistent connectivity.\n- Supports comprehensive error detection and handling of SOAP faults, API exceptions, and network issues to maintain integration stability.\n- Enables customization of SOAP headers, namespaces, and envelope structures as required by specific Salesforce API calls.\n- Allows configuration of retry policies and backoff strategies in response to transient errors or rate limiting.\n- Supports environment-specific configurations, such as production, non-production, or custom Salesforce domains.\n\n**Implementation guidance**\n- Configure endpoint URLs and authentication credentials accurately based on the Salesforce environment (production, non-production, or custom domains).\n- Validate SOAP envelope structure, namespaces, and compliance with the latest Salesforce WSDL specifications to prevent communication errors.\n- Implement robust error handling to capture, log, and respond appropriately to SOAP faults, API limit exceptions, and network failures.\n- Securely store and transmit sensitive information such as session IDs, OAuth tokens, passwords, and certificates, following best security practices.\n- Monitor Salesforce API usage limits and implement throttling or queuing mechanisms to avoid exceeding quotas and service disruptions.\n- Set timeout values thoughtfully to balance between preventing hanging requests and allowing sufficient time for valid operations to complete.\n- Regularly update the SOAP configuration to align with Salesforce API version changes, WSDL updates, and evolving security requirements.\n- Incorporate logging and monitoring to track SOAP request and response cycles for troubleshooting and performance optimization.\n- Ensure compatibility with Salesforce security protocols, including TLS versions and encryption standards.\n\n**Examples**\n- Specifying the Salesforce SOAP API endpoint URL along with a valid session ID or OAuth token for authentication.\n- Defining custom SOAP headers"},"sObjectType":{"type":"string","description":"sObjectType specifies the exact Salesforce object type that the API operation will target, such as standard objects like Account, Contact, Opportunity, or custom objects like CustomObject__c. This property is essential for defining the schema context of the API request, determining which fields are relevant, and directing the operation to the correct Salesforce data entity. It plays a critical role in data validation, processing logic, and endpoint routing within the Salesforce environment, ensuring that operations such as create, update, delete, or query are executed against the intended object type with precision. Accurate specification of sObjectType enables dynamic adjustment of request payloads and response handling based on the object’s schema and enforces compliance with Salesforce’s API naming conventions and permissions model.  \n**Field behavior:**  \n- Identifies the specific Salesforce object (sObject) type targeted by the API operation.  
\n- Determines the applicable schema, field availability, and validation rules based on the selected object.  \n- Must exactly match a valid Salesforce object API name recognized by the connected Salesforce instance, including case sensitivity.  \n- Routes the API request to the appropriate Salesforce object endpoint for the intended operation.  \n- Influences dynamic field mapping, data transformation, and response parsing processes.  \n- Supports both standard and custom Salesforce objects, with custom objects requiring the “__c” suffix.  \n**Implementation guidance:**  \n- Validate sObjectType values against the current Salesforce object metadata in the target environment to prevent invalid requests.  \n- Enforce exact case-sensitive matching with Salesforce API object names (e.g., \"Account\" not \"account\").  \n- Support and recognize both standard objects and custom objects, ensuring custom objects include the “__c” suffix.  \n- Implement robust error handling for invalid, unsupported, or deprecated sObjectType values to provide clear feedback.  \n- Regularly update the list of allowed sObjectType values to reflect changes in the Salesforce schema or environment.  \n- Use sObjectType to dynamically tailor API request payloads, field selections, and response parsing logic.  \n- Consider object-level permissions and authentication scopes when processing requests involving specific sObjectTypes.  \n**Examples:**  \n- \"Account\"  \n- \"Contact\"  \n- \"Opportunity\"  \n- \"Lead\"  \n- \"CustomObject__c\"  \n- \"Case\"  \n- \"Campaign\"  \n**Important notes:**  \n- Custom Salesforce objects must be referenced using their exact API names, including the “__c” suffix.  \n-"},"idLookup":{"type":"object","properties":{"extract":{"type":"string","description":"Please specify which field from your export data should map to External ID. It is very important to pick a field that is both unique, and guaranteed to always have a value when exported (otherwise the Salesforce Upsert operation will fail). If sample data is available then a select list should be displayed here to help you find the right field from your export data. 
If sample data is not available then you will need to manually type the field id that should map to External ID.\n\n**Field behavior**\n- Specifies which field from the incoming data contains the External ID value\n- **Used for upsert operations ONLY**\n- Maps the incoming data field to the Salesforce External ID field specified in `upsert.externalIdField`\n- Must reference a field that exists in the incoming data payload\n- The field value must be unique and always populated to ensure reliable matching\n- Generates field mapping at runtime to match incoming records with Salesforce records\n\n**When to use**\n- **upsert operations ONLY** — REQUIRED when operation is \"upsert\"\n- Works together with `upsert.externalIdField` to enable External ID matching\n\n**When not to use**\n- **update** — use `whereClause` instead\n- **delete** — use `whereClause` instead\n- **addupdate** — use `whereClause` instead\n- **insert with ignoreExisting** — use `whereClause` instead\n\n**Implementation guidance**\n- Choose a field from incoming data that contains unique, non-null values\n- Ensure the field is always populated in the incoming records to prevent operation failures\n- This field's value will be matched against the Salesforce External ID field specified in `upsert.externalIdField`\n- Use the exact field name as it appears in the incoming data (case-sensitive)\n- If sample data is available, validate that the field exists and has appropriate values\n- Test with sample data to ensure matching works correctly before production deployment\n\n**Example**\n\n**Upsert by External id**\n```json\n{\n  \"operation\": \"upsert\",\n  \"idLookup\": {\n    \"extract\": \"CustomerNumber\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\nMaps incoming field \"CustomerNumber\" → Salesforce field \"External_ID__c\"\n\n**How it works:**\n- Incoming record has `{\"CustomerNumber\": \"CUST-12345\", \"Name\": \"John\"}`\n- System extracts \"CUST-12345\" from CustomerNumber\n- Searches Salesforce External_ID__c field for \"CUST-12345\"\n- If found: updates the existing record\n- If not found: creates a new record with External_ID__c = \"CUST-12345\"\n\n**Common field names**\n- \"ExternalId\" — incoming field containing external identifier\n- \"CustomerId\" — incoming field with unique customer number\n- \"AccountNumber\" — incoming field with account number\n- \"SourceSystemId\" — incoming field from source system\n\n**Important notes**\n- **ONLY use for upsert operations**\n- For all other operations (update, delete, addupdate, insert with ignoreExisting), use `whereClause` instead\n- Field must exist in incoming data and be consistently populated\n- Works with `upsert.externalIdField` to enable External ID matching\n- Missing or null values will cause the upsert operation to fail\n- Field name is case-sensitive and must match exactly\n- If both `extract` and `whereClause` are set, `extract` takes precedence for upsert"},"whereClause":{"type":"string","description":"To determine if a record exists in Salesforce, define a SOQL query WHERE clause for the sObject type selected above. For example, if you are importing contact records with a unique email, your WHERE clause should identify contacts with the same email.\n\nHandlebars statements are allowed, and more complex WHERE clauses can contain AND and OR logic. 
For example, when the product name alone is not unique, you can define a WHERE clause to find any records where the product name exists AND the product also belongs to a specific product family, resulting in a unique product record that matches both criteria.\n\nTo find string values that contain spaces, wrap them in single quotes, for example:\n\nName = 'John Smith'\n\n**Field behavior**\n- Specifies a SOQL conditional expression to match existing records in Salesforce\n- Omit the \"WHERE\" keyword - only provide the condition expression\n- Supports Handlebars {{fieldName}} syntax to reference incoming data fields\n- Supports SOQL operators: =, !=, >, <, >=, <=, LIKE, IN, NOT IN\n- Supports logical operators: AND, OR, NOT\n- Can use parentheses for grouping complex conditions\n- The clause is evaluated for each incoming record to find matching Salesforce records\n\n**When to use (REQUIRED FOR)**\n- **update** — REQUIRED to identify which records to update\n- **delete** — REQUIRED to identify which records to delete\n- **addupdate** — REQUIRED to identify existing records (update if found, insert if not)\n- **insert with ignoreExisting: true** — REQUIRED to check if records exist (skip if found)\n\n**When not to use**\n- **upsert** — DO NOT use; use `extract` instead\n- **insert without ignoreExisting** — Not needed for simple inserts\n\n**Implementation guidance**\n- Write the clause following SOQL syntax rules, omitting the \"WHERE\" keyword\n- Use {{fieldName}} syntax to reference values from incoming records\n- Wrap string values containing spaces in single quotes: `Name = '{{FullName}}'`\n- Numeric and boolean values don't need quotes: `Age = {{Age}}`\n- Use indexed fields (Email, External ID) for better performance\n- Keep the clause selective to minimize records returned\n- Validate syntax before deployment to prevent runtime errors\n\n**Examples**\n\n**Update by Email**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\nFinds records with matching Email for update\n\n**Insert with ignore existing by Email**\n```json\n{\n  \"operation\": \"insert\",\n  \"ignoreExisting\": true,\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\nChecks if a record with matching Email exists; if found, skips creation\n\n**Addupdate (create or update) by Email**\n```json\n{\n  \"operation\": \"addupdate\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\nUpdates if Email matches, creates new record if not found\n\n**Delete by Account Number**\n```json\n{\n  \"operation\": \"delete\",\n  \"idLookup\": {\n    \"whereClause\": \"AccountNumber = '{{AcctNum}}'\"\n  }\n}\n```\nFinds records with matching AccountNumber for deletion\n\n**Complex and logic**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{ProductName}}' AND ProductFamily__c = '{{Family}}'\"\n  }\n}\n```\nMatches records where both Name AND ProductFamily match\n\n**Complex or logic**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}' OR Phone = '{{Phone}}'\"\n  }\n}\n```\nMatches records where Email OR Phone matches\n\n**String with spaces**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{FullName}}'\"\n  }\n}\n```\nNote: Single quotes around {{FullName}} handle names with spaces like \"John Smith\"\n\n**Important notes**\n- **REQUIRED** for update, delete, addupdate, and 
insert (with ignoreExisting)\n- DO NOT use for upsert operations - use `extract` instead\n- Use {{fieldName}} to reference incoming data fields\n- String values with spaces must be wrapped in single quotes\n- Numeric and boolean values don't need quotes\n- Poor clause design can impact performance and hit governor limits\n- Always test in a non-production environment before production deployment"}},"description":"Configuration for looking up existing Salesforce records. **This object should ONLY contain either `extract` OR `whereClause` - NOT both, and NOT operation or other properties.**\n\n**What this object contains**\nThis object should contain ONLY ONE of these properties:\n- `extract` — for upsert operations ONLY\n- `whereClause` — for update, delete, addupdate, and insert (with ignoreExisting)\n\n**DO NOT include operation, upsert, or any other properties here** - those belong at the parent salesforce level.\n\n**Field behavior**\n- Contains two strategies for finding existing records: `extract` and `whereClause`\n- **`extract`**: Used ONLY for upsert operations\n- **`whereClause`**: Used for update, delete, addupdate, and insert (with ignoreExisting)\n- Generates field mapping or SOQL queries at runtime to match records\n- Critical for preventing duplicate record creation and enabling record updates\n\n**When to use each field**\n\n**Use `extract` (upsert ONLY)**\n- **upsert** — REQUIRED: Maps incoming field to External ID field\n\n**Use `whereClause` (all other operations)**\n- **update** — REQUIRED: Identifies records to update via SOQL query\n- **delete** — REQUIRED: Identifies records to delete via SOQL query\n- **addupdate** — REQUIRED: Identifies existing records via SOQL query\n- **insert with ignoreExisting: true** — REQUIRED: Checks if records exist via SOQL query\n\n**Leave empty**\n- **insert without ignoreExisting** — No lookup needed for simple inserts\n\n**What to set (idLookup object content)**\n\n**For upsert**\n```json\n{\n  \"extract\": \"CustomerNumber\"\n}\n```\n\n**For update/delete/addupdate**\n```json\n{\n  \"whereClause\": \"Email = '{{Email}}'\"\n}\n```\n\n**Parent context examples (for reference)**\nThese show the complete parent-level structure. 
**idLookup itself should only contain extract or whereClause.**\n\n**Upsert (parent context)**\n```json\n{\n  \"operation\": \"upsert\",\n  \"idLookup\": {\n    \"extract\": \"CustomerNumber\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\n\n**Update (parent context)**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\n\n**Implementation guidance**\n- Use `extract` ONLY for upsert operations (with `upsert.externalIdField`)\n- Use `whereClause` for update, delete, addupdate, and insert with ignoreExisting\n- Do not set both `extract` and `whereClause` for the same operation\n- **ONLY set extract or whereClause - do not nest operation or other properties**\n- Ensure the lookup strategy matches the operation type\n- Test thoroughly in a non-production environment before production deployment\n\n**Important notes**\n- **This object ONLY contains `extract` or `whereClause`**\n- **`extract`** is for upsert ONLY - maps to External ID field\n- **`whereClause`** is for all other operations requiring lookups\n- Do NOT nest `operation`, `upsert`, or other properties here - those belong at the parent salesforce level\n- For upsert: `extract` + `upsert.externalIdField` work together (both at salesforce level)\n- For other operations: `whereClause` enables SOQL-based record matching\n- Incorrect configuration can cause duplicates or failed operations"},"upsert":{"type":"object","properties":{"externalIdField":{"type":"string","description":"Specifies the API name of the External ID field in Salesforce that will be used to match records during an upsert operation. This is the TARGET field in Salesforce, while `idLookup.extract` specifies the SOURCE field from incoming data.\n\n**Field behavior**\n- Identifies which Salesforce field serves as the external ID for record matching\n- Must be a field marked as \"External ID\" in Salesforce object configuration\n- Works together with `idLookup.extract` to enable upsert matching\n- The field must be indexed and unique in Salesforce\n- Supports text, number, and email field types configured as external IDs\n- Required when `operation: \"upsert\"`\n\n**How it works with idLookup.extract**\n- **`idLookup.extract`**: Specifies which incoming data field contains the matching value (e.g., \"CustomerNumber\")\n- **`externalIdField`**: Specifies which Salesforce field to match against (e.g., \"External_ID__c\")\n- Together they create the match: incoming \"CustomerNumber\" → Salesforce \"External_ID__c\"\n\n**Implementation guidance**\n- Use the exact Salesforce API name of the field (case-sensitive)\n- Verify the field is marked as \"External ID\" in Salesforce object settings\n- Ensure the field is indexed for optimal performance\n- Confirm the integration user has read/write access to this field\n- The field must have unique values to prevent ambiguous matches\n- Test in a non-production environment before production to verify configuration\n\n**Examples**\n\n**Standard External id field**\n```\n\"AccountNumber\"\n```\nStandard Account field marked as External ID\n\n**Custom External id field**\n```\n\"Custom_External_Id__c\"\n```\nCustom field marked as External ID\n\n**Email as External id**\n```\n\"Email\"\n```\nEmail field configured as External ID on Contact\n\n**NetSuite id as External id**\n```\n\"netsuite_conn__NetSuite_Id__c\"\n```\nNetSuite connector External ID field\n\n**How it works (Parent Context)**\nAt the parent `salesforce` level, the complete 
configuration looks like:\n```json\n{\n  \"operation\": \"upsert\",\n  \"idLookup\": {\n    \"extract\": \"CustomerNumber\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\nWhere:\n- `operation: \"upsert\"` — at salesforce level\n- `idLookup.extract: \"CustomerNumber\"` — incoming field, at salesforce level\n- `upsert.externalIdField: \"External_ID__c\"` — **THIS field**, inside upsert object\n\n**Common external id fields**\n- \"AccountNumber\" — standard Account field\n- \"Email\" — when configured as External ID on Contact\n- \"Custom_External_Id__c\" — custom field on any object\n- \"External_ID__c\" — common naming convention\n- \"Source_System_Id__c\" — for multi-system integrations\n\n**Important notes**\n- Field must be explicitly marked as \"External ID\" in Salesforce\n- Not all fields can be External IDs (must be text, number, or email type)\n- Missing or null values in this field will cause upsert to fail\n- If multiple records match the External ID, upsert will fail\n- Always use with `idLookup.extract` for upsert operations"}},"description":"Configuration for Salesforce upsert operations. This object contains ONLY the `externalIdField` property.\n\n**CRITICAL: This component is REQUIRED when `operation: \"upsert\"`. Always include this component for upsert operations.**\n\n**Field behavior**\n- **REQUIRED** when `operation: \"upsert\"`\n- Contains configuration specific to upsert behavior\n- **ONLY contains ONE property: `externalIdField`**\n- Works in conjunction with `idLookup.extract` (which is at the SAME level as this upsert object, NOT nested within it)\n\n**When to use**\n- **REQUIRED** when `operation` is \"upsert\" - MUST be included\n- Leave undefined for insert, update, delete, or addupdate operations\n- Without this object, upsert operations will fail\n\n**Why IT'S required**\n- Upsert operations need to know which Salesforce field to use for matching\n- The `externalIdField` property specifies the Salesforce External ID field\n- This works with `idLookup.extract` to enable record matching\n- Missing this object will cause the import to fail\n\n**Decision**\n**Return true/include this component if `operation: \"upsert\"` is set. This is REQUIRED for all upsert operations.**\n\n**What to set**\n**This object should ONLY contain:**\n```json\n{\n  \"externalIdField\": \"External_ID__c\"\n}\n```\n\n**DO NOT include operation, idLookup, or any other properties here** - those belong at the parent salesforce level.\n\n**Parent context (for reference only)**\nAt the parent `salesforce` level, you will also set:\n- `operation: \"upsert\"` (at salesforce level)\n- `idLookup.extract` (at salesforce level)\n- `upsert.externalIdField` (THIS object)\n\n**Implementation guidance**\n- Always set `externalIdField` when using upsert\n- Ensure the specified External ID field exists in Salesforce\n- Verify the field is marked as \"External ID\" in Salesforce\n- **ONLY set the externalIdField property - nothing else**\n\n**Important notes**\n- **This object ONLY contains `externalIdField`**\n- Do NOT nest `operation`, `idLookup`, or other properties here\n- Those properties belong at the parent salesforce level, not nested in this upsert object\n- May be extended with additional upsert-specific options in the future\n- Only relevant when operation is \"upsert\""},"upsertpicklistvalues":{"type":"object","properties":{"type":{"type":"string","enum":["picklist","multipicklist"],"description":"Specifies the type of picklist field being managed. 
Salesforce supports single-select picklists and multi-select picklists.\n\n**Field behavior**\n- Determines whether the picklist allows single or multiple value selection\n- Affects UI rendering and validation in Salesforce\n- Automatically converted to lowercase before processing\n\n**Picklist types**\n\n****picklist****\n- Standard single-select picklist\n- User can select only one value at a time\n- Most common picklist type\n\n****multipicklist****\n- Multi-select picklist\n- User can select multiple values (separated by semicolons)\n- Use for categories, tags, or multi-value selections\n\n**Examples**\n- \"picklist\" — for Status, Priority, Category fields\n- \"multipicklist\" — for Tags, Skills, Interests fields"},"fullName":{"type":"string","description":"The API name of the picklist field in Salesforce, including the object name.\n\n**Field behavior**\n- Fully qualified field name: ObjectName.FieldName__c\n- Must match exact Salesforce API name\n- Case-sensitive\n- Required for picklist value upsert operations\n\n**Examples**\n- \"Account.MyPicklist__c\"\n- \"Contact.Status__c\"\n- \"CustomObject__c.Category__c\""},"label":{"type":"string","description":"The display label for the picklist field in Salesforce UI.\n\n**Field behavior**\n- Human-readable label shown in Salesforce interface\n- Can contain spaces and special characters\n- Does not need to match API name\n\n**Examples**\n- \"My Picklist\"\n- \"Product Category\"\n- \"Customer Status\""},"visibleLines":{"type":"number","description":"For multi-select picklists, specifies how many lines are visible in the UI without scrolling.\n\n**Field behavior**\n- Only applies to multipicklist fields\n- Controls height of the picklist selection box\n- Default is typically 4-5 lines\n\n**Examples**\n- 3 — shows 3 values at once\n- 5 — shows 5 values at once\n- 10 — shows more values for large lists"}},"description":"Configuration for managing Salesforce picklist field metadata when using the `upsertpicklistvalues` operation. This enables dynamic creation or updates to picklist field definitions.\n\n**Field behavior**\n- Used ONLY when `operation: \"upsertpicklistvalues\"`\n- Manages picklist field metadata, not picklist values themselves\n- Enables programmatic picklist field management\n- Requires appropriate Salesforce metadata permissions\n\n**When to use**\n- When operation is set to \"upsertpicklistvalues\"\n- For managing picklist field definitions programmatically\n- When syncing picklist configurations between orgs\n- For automated picklist field deployment\n\n**Important notes**\n- This manages the FIELD itself, not the individual picklist VALUES\n- Requires Salesforce API and Metadata API permissions\n- Should only be set when operation is \"upsertpicklistvalues\"\n- Most imports won't use this - it's for metadata management"},"removeNonSubmittableFields":{"type":"boolean","description":"Indicates whether fields that are not eligible for submission to Salesforce—such as read-only, system-generated, formula, or otherwise restricted fields—should be automatically excluded from the data payload before it is sent. This feature helps prevent submission errors by ensuring that only valid, submittable fields are included in the API request, thereby improving data integrity and reducing the likelihood of operation failures. 
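\n\n**Illustrative sketch (hypothetical payload)**\nA hedged before-and-after illustration of the behavior described for this flag; the field names are placeholders, and the exact set of stripped fields depends on the target object's metadata and field-level security:\n```json\n{\n  \"removeNonSubmittableFields\": true\n}\n```\nWith the flag enabled, a mapped record such as `{ \"Name\": \"Acme\", \"CreatedDate\": \"2024-01-01\" }` would be submitted as `{ \"Name\": \"Acme\" }`, because `CreatedDate` is system-managed.\n\n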
It operates exclusively on the outgoing submission payload without modifying the original source data, making it a safe and effective way to sanitize data before interaction with Salesforce APIs.\n\n**Field behavior**\n- When enabled (set to true), the system analyzes the outgoing data payload and removes any fields identified as non-submittable based on Salesforce metadata, field-level security, and API constraints.\n- When disabled (set to false) or omitted, all fields present in the payload are submitted as-is, which may lead to errors if restricted, read-only, or system-managed fields are included.\n- Enhances data integrity by proactively filtering out fields that could cause rejection or failure during Salesforce data operations.\n- Applies only to the outgoing submission payload, ensuring the original source data remains unchanged and intact.\n- Dynamically adapts to different Salesforce objects, API versions, and user permissions by referencing up-to-date metadata and security settings.\n\n**Implementation guidance**\n- Enable this flag in integration scenarios where the data schema may include fields that Salesforce does not accept for the target object or API version.\n- Maintain an up-to-date cache or real-time access to Salesforce metadata to accurately reflect field-level permissions, submittability, and API changes.\n- Implement logging or reporting mechanisms to track which fields are removed, aiding in auditing, troubleshooting, and transparency.\n- Use in conjunction with other data validation, transformation, and permission-checking steps to ensure clean, compliant data submissions.\n- Thoroughly test in development and staging environments to understand the impact on data flows, error handling, and downstream processes.\n- Consider user roles and permissions, as submittability may vary depending on the authenticated user's access rights and profile settings.\n\n**Examples**\n- `removeNonSubmittableFields: true` — Automatically removes read-only or system-managed fields such as CreatedDate, LastModifiedById, or formula fields before submission.\n- `removeNonSubmittableFields: false` — Submits all fields, potentially causing errors if restricted or read-only fields are included.\n- Omitting the property defaults to false, meaning no automatic removal of non-submittable fields occurs."},"document":{"type":"object","properties":{"id":{"type":"string","description":"Unique identifier for the document within the Salesforce system, serving as the primary key that distinctly identifies each document record. This identifier is immutable once assigned and must not be altered or reused to maintain data integrity. It is essential for referencing the document in API calls, queries, integrations, and any operations involving the document. The ID is a base-62 encoded string that can be either 15 characters long, which is case-sensitive, or 18 characters long, which includes a case-insensitive checksum to enhance reliability in external systems. Proper handling of this ID ensures accurate linkage and manipulation of document records across various Salesforce processes and integrations.  \n**Field behavior:**  \n- Acts as the primary key uniquely identifying a document record within Salesforce.  \n- Immutable after creation; should never be changed or reassigned.  \n- Used consistently to reference the document across API calls, queries, and integrations.  \n- May be 15 characters (case-sensitive) or 18 characters (with a case-insensitive checksum).  
\n- Serves as a critical reference point for establishing relationships with other records.  \n**Implementation guidance:**  \n- Must be generated exclusively by Salesforce or the managing system to guarantee uniqueness.  \n- Should be handled as a string to accommodate Salesforce’s ID formats.  \n- Validate incoming IDs against Salesforce ID format standards to ensure correctness.  \n- Avoid manual generation or modification of IDs to prevent data inconsistencies.  \n- Use the 18-character format when possible to leverage the checksum for case-insensitive environments.  \n**Examples:**  \n- \"0151t00000ABCDE\"  \n- \"0151t00000ABCDEAAA\"  \n**Important notes:**  \n- 15-character IDs are case-sensitive, while 18-character IDs include a case-insensitive checksum.  \n- Do not manually create or alter IDs; always use Salesforce-generated values.  \n- This ID is critical for performing document updates, deletions, retrievals, and establishing relationships.  \n- Using the correct ID format is essential for API compatibility and data integrity.  \n**Dependency chain:**  \n- Required for all operations on salesforce.document entities, including CRUD actions.  \n- Other fields and processes may reference this ID to link related records or trigger actions.  \n- Integral to workflows, triggers, and integrations that depend on document identification.  \n**Technical details:**  \n- Salesforce record IDs are base-62 encoded strings.  \n- IDs can be 15 characters (case-sensitive) or 18 characters (case-insensitive, with a checksum suffix)."},"name":{"type":"string","description":"The name of the document as stored in Salesforce, serving as a unique and human-readable identifier within the Salesforce environment. This string value represents the document's title or label, enabling users and systems to easily recognize, retrieve, and manage the document across user interfaces, search functionalities, and API interactions. It is a critical attribute used to reference the document in various operations such as creation, updates, searches, display contexts, and integration workflows, ensuring clear identification and organization within folders or libraries. 
The name plays a pivotal role in document management by facilitating sorting, filtering, and categorization, and it must adhere to Salesforce’s naming conventions and constraints to maintain system integrity and usability.\n\n**Field behavior**\n- Acts as the primary human-readable identifier for the document.\n- Used in UI elements, search queries, and API calls to locate or reference the document.\n- Typically mandatory when creating or updating document records.\n- Should be unique within the scope of the relevant Salesforce object, folder, or library to avoid ambiguity.\n- Influences sorting, filtering, and organization of documents within lists or folders.\n- Changes to the name may trigger updates or validations in dependent systems or references.\n- Supports international characters to accommodate global users.\n\n**Implementation guidance**\n- Choose descriptive yet concise names to enhance clarity, usability, and discoverability.\n- Validate input to exclude unsupported special characters and ensure compliance with Salesforce naming rules.\n- Enforce uniqueness constraints within the relevant context to prevent conflicts and confusion.\n- Consider the impact of renaming documents on existing references, hyperlinks, integrations, and audit trails.\n- Account for potential length restrictions and implement truncation or user prompts as necessary.\n- Support internationalization by allowing UTF-8 encoded characters where appropriate.\n- Implement validation to prevent use of reserved words or prohibited characters.\n\n**Examples**\n- \"Quarterly Sales Report Q1 2024\"\n- \"Employee Handbook\"\n- \"Project Plan - Alpha Release\"\n- \"Marketing Strategy Overview\"\n- \"Client Contract - Acme Corp\"\n\n**Important notes**\n- The maximum length of the name may be limited (commonly up to 255 characters), depending on Salesforce configuration.\n- Renaming a document can affect external references, hyperlinks, or integrations relying on the original name.\n- Case sensitivity in name comparisons may vary based on Salesforce settings and should be considered when implementing logic.\n- Special characters and whitespace handling should align with Salesforce’s accepted standards to avoid errors.\n- Avoid using reserved words or prohibited characters to ensure compatibility and"},"folderId":{"type":"string","description":"The unique identifier of the folder within Salesforce where the document is stored. This ID is essential for organizing, categorizing, and managing documents efficiently within the Salesforce environment. It enables precise association of documents to specific folders, facilitating streamlined retrieval, access control, and hierarchical organization of document assets. By linking a document to a folder, it inherits the folder’s permissions and visibility settings, ensuring consistent access management and supporting structured document workflows. 
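\n\n**Illustrative sketch (hypothetical IDs)**\nA hedged example of assigning or changing the folder association by setting this field on a document object; both IDs are placeholder values in Salesforce ID format, not real records:\n```json\n{\n  \"document\": {\n    \"id\": \"0151t00000ABCDEAAA\",\n    \"folderId\": \"00l5g000004AbcD\"\n  }\n}\n```\n\n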
Proper use of this identifier allows for effective filtering, querying, and management of documents based on their folder location, which is critical for maintaining an organized and secure document repository.\n\n**Field behavior**\n- Identifies the exact folder containing the document.\n- Associates the document with a designated folder for structured organization.\n- Must reference a valid, existing folder ID within Salesforce.\n- Typically represented as a string conforming to Salesforce ID formats (15- or 18-character).\n- Influences document visibility and access based on folder permissions.\n- Changing the folderId moves the document to a different folder, affecting its context and access.\n- Inherits folder-level sharing and security settings automatically.\n\n**Implementation guidance**\n- Verify that the folderId exists and is accessible in the Salesforce instance before assignment.\n- Validate the folderId format to ensure it matches Salesforce’s 15- or 18-character ID standards.\n- Use this field to filter, query, or categorize documents by their folder location in API operations.\n- When creating or updating documents, setting this field assigns or moves the document to the specified folder.\n- Handle permission checks to ensure the user or integration has rights to assign or access the folder.\n- Consider the impact on sharing rules, workflows, and automation triggered by folder changes.\n- Ensure case sensitivity is preserved when handling folder IDs.\n\n**Examples**\n- \"00l1t000003XyzA\" (example of a Salesforce folder ID)\n- \"00l5g000004AbcD\"\n- \"00l9m00000FghIj\"\n\n**Important notes**\n- The folderId must be valid and the folder must be accessible by the user or integration performing the operation.\n- Using an incorrect or non-existent folderId will cause errors or result in improper document categorization.\n- Folder-level permissions in Salesforce can restrict the ability to assign documents or access folder contents.\n- Changes to folderId may affect document sharing, visibility settings, and trigger related workflows.\n- Folder IDs are case-sensitive and must be handled accordingly.\n- Moving documents between folders"},"contentType":{"type":"string","description":"The MIME type of the document, specifying the exact format of the file content stored within the Salesforce document. This field enables systems and applications to accurately identify, process, render, or handle the document by indicating its media type, such as PDF, image, audio, or text formats. It adheres to standard MIME type conventions (type/subtype) and may include additional parameters like character encoding when necessary. Properly setting this field ensures seamless integration, correct display, appropriate application usage, and reliable content negotiation across diverse platforms and services. 
It plays a critical role in workflows, security scanning, compliance validation, and API interactions by guiding how the document is stored, transmitted, and interpreted.\n\n**Field behavior**\n- Defines the media type of the document content to guide processing, rendering, and handling.\n- Identifies the specific file format (e.g., PDF, JPEG, DOCX) for compatibility and validation purposes.\n- Facilitates content negotiation and appropriate response handling between systems and applications.\n- Typically formatted as a standard MIME type string (type/subtype), optionally including parameters such as charset.\n- Influences document handling in workflows, security scans, user interfaces, and API interactions.\n- Serves as a key attribute for determining how the document is stored, transmitted, and displayed.\n\n**Implementation guidance**\n- Always use valid MIME types conforming to official standards (e.g., \"application/pdf\", \"image/png\") to ensure broad compatibility.\n- Verify that the contentType accurately reflects the actual file content to prevent processing errors or misinterpretation.\n- Update the contentType promptly if the document format changes to maintain consistency and reliability.\n- Utilize this field to enable correct content handling in APIs, integrations, client applications, and automated workflows.\n- Include relevant parameters (such as charset for text-based documents) when applicable to provide additional context.\n- Consider validating the contentType against the file extension and content during upload or processing to enhance data integrity.\n\n**Examples**\n- \"application/pdf\" for PDF documents.\n- \"image/jpeg\" for JPEG image files.\n- \"application/vnd.openxmlformats-officedocument.wordprocessingml.document\" for Microsoft Word DOCX files.\n- \"text/plain; charset=utf-8\" for plain text files with UTF-8 encoding.\n- \"application/json\" for JSON formatted documents.\n- \"audio/mpeg\" for MP3 audio files.\n\n**Important notes**\n- Incorrect or mismatched contentType values can cause improper document rendering, processing failures,"},"developerName":{"type":"string","description":"developerName is a unique, developer-friendly identifier assigned to the document within the Salesforce environment. It acts as a stable and consistent reference used primarily in programmatic contexts such as Apex code, API calls, metadata configurations, deployment scripts, and integration workflows. This identifier ensures reliable access, manipulation, and management of the document across various Salesforce components and external systems. The value should be concise, descriptive, and strictly adhere to Salesforce naming conventions to prevent conflicts and ensure maintainability. 
Typically, it employs camelCase or underscores, excludes spaces and special characters, and is designed to remain unchanged over time to avoid breaking dependencies or integrations.\n\n**Field behavior**\n- Serves as a unique and immutable identifier for the document within the Salesforce org.\n- Widely used in code, APIs, metadata, deployment processes, and integrations to reference the document reliably.\n- Should remain stable and not be altered frequently to maintain integration and system stability.\n- Enforces naming conventions that disallow spaces and special characters, favoring camelCase or underscores.\n- Case-insensitive in many contexts but consistent casing is recommended for clarity and maintainability.\n- Acts as a key reference point in metadata relationships and dependency mappings.\n\n**Implementation guidance**\n- Ensure the developerName is unique within the Salesforce environment to prevent naming collisions.\n- Follow Salesforce naming rules: start with a letter, use only alphanumeric characters and underscores.\n- Avoid reserved keywords, spaces, special characters, and punctuation marks.\n- Keep the identifier concise yet descriptive enough to clearly convey the document’s purpose or function.\n- Validate the developerName against Salesforce’s character set, length restrictions, and naming policies before deployment.\n- Plan for the developerName to remain stable post-deployment to avoid breaking references and integrations.\n- Use consistent casing conventions (camelCase or underscores) to improve readability and reduce errors.\n- Incorporate meaningful terms that reflect the document’s role or content for easier identification.\n\n**Examples**\n- \"InvoiceTemplate\"\n- \"CustomerAgreementDoc\"\n- \"SalesReport2024\"\n- \"ProductCatalog_v2\"\n- \"EmployeeOnboardingGuide\"\n\n**Important notes**\n- Changing the developerName after deployment can disrupt integrations, metadata references, and dependent components.\n- It is distinct from the document’s display name, which can include spaces and be more user-friendly.\n- While Salesforce treats developerName as case-insensitive in many scenarios, maintaining consistent casing improves readability and maintenance.\n- Commonly referenced in metadata APIs, deployment tools, automation scripts, and"},"isInternalUseOnly":{"type":"boolean","description":"isInternalUseOnly indicates whether the document is intended exclusively for internal use within the organization and must not be shared with external parties. This flag is critical for protecting sensitive, proprietary, or confidential information by restricting access strictly to authorized internal users. It serves as a definitive marker within the document’s metadata to enforce internal visibility policies, prevent accidental or unauthorized external distribution, and support compliance with organizational security standards. 
Proper handling of this flag ensures that internal communications, strategic plans, financial data, and other sensitive materials remain protected from exposure outside the company.\n\n**Field behavior**\n- When set to true, the document is strictly restricted to internal users and must not be accessible or shared with any external entities.\n- When false or omitted, the document may be accessible to external users, subject to other access control mechanisms.\n- Commonly used to flag documents containing sensitive business strategies, internal communications, proprietary data, or confidential policies.\n- Influences user interface elements and API responses to hide, restrict, or clearly label document visibility accordingly.\n- Changes to this flag can trigger notifications, workflow actions, or audit events to ensure proper handling and compliance.\n- May affect document indexing, search results, and export capabilities to prevent unauthorized dissemination.\n\n**Implementation guidance**\n- Always verify this flag before granting document access in both user interfaces and backend APIs to ensure compliance with internal use policies.\n- Use in conjunction with role-based access controls, group memberships, document classification labels, and data sensitivity tags to enforce comprehensive security.\n- Carefully manage updates to this flag to prevent inadvertent exposure of confidential information; consider implementing approval workflows and multi-level reviews for changes.\n- Implement audit logging for all changes to this flag to support compliance, traceability, and forensic investigations.\n- Ensure all integrated systems, third-party applications, and data export mechanisms respect this flag to maintain consistent access restrictions.\n- Provide clear UI indicators, warnings, or access restrictions to users when accessing documents marked as internal use only.\n- Regularly review and validate the accuracy of this flag to maintain up-to-date security postures.\n\n**Examples**\n- true: A confidential internal report on upcoming product launches accessible only to employees.\n- false: A publicly available user manual or product datasheet intended for customers and partners.\n- true: Internal HR policies and procedures documents restricted to company staff.\n- false: Press releases or marketing materials distributed externally.\n- true: Financial forecasts and budget plans shared exclusively within the finance department.\n- true: Internal audit findings and compliance reports"},"isPublic":{"type":"boolean","description":"Indicates whether the document is publicly accessible to all users or restricted exclusively to authorized users within the system. This property governs the document's visibility and access permissions, determining if authentication is required to view its content. When set to true, the document becomes openly available without any access controls, allowing anyone—including unauthenticated users—to access it freely. Conversely, setting this flag to false enforces security measures that limit access based on user roles, permissions, and organizational sharing rules, ensuring that only authorized personnel can view the document. 
This setting directly impacts how the document appears in search results, sharing interfaces, and external indexing services, making it a critical factor in managing data privacy, compliance, and information governance.\n\n**Field behavior**\n- When true, the document is accessible by anyone, including unauthenticated users.\n- When false, access is restricted to users with explicit permissions or roles.\n- Directly influences the document’s visibility in search results, sharing interfaces, and external indexing.\n- Changes to this property take immediate effect, altering who can view the document.\n- Modifications may trigger updates in access control enforcement mechanisms.\n\n**Implementation guidance**\n- Use this property to clearly define and enforce document sharing and visibility policies.\n- Always set to false for sensitive, confidential, or proprietary documents to prevent unauthorized access.\n- Implement strict validation to ensure only users with appropriate administrative rights can modify this property.\n- Consider organizational compliance requirements, privacy regulations, and data security standards before making documents public.\n- Regularly monitor and audit changes to this property to maintain security oversight and compliance.\n- Plan changes carefully, as toggling this setting can have immediate and broad impact on document accessibility.\n\n**Examples**\n- true: A publicly available product catalog, marketing brochure, or press release intended for external audiences.\n- false: Internal project plans, financial reports, HR documents, or any content restricted to company personnel.\n\n**Important notes**\n- Making a document public may expose it to indexing by search engines if accessible via public URLs.\n- Changes to this property have immediate effect on document accessibility; ensure appropriate change management.\n- Ensure alignment with corporate governance, legal requirements, and data protection policies when toggling public access.\n- Public documents should be reviewed periodically to confirm that their visibility remains appropriate.\n- Unauthorized changes to this property can lead to data leaks or compliance violations.\n\n**Dependency chain**\n- Access control depends on user roles, profiles, and permission sets configured within the system.\n- Interacts"}},"description":"The 'document' property represents a digital file or record associated with a Salesforce entity, encompassing a wide range of file types such as attachments, reports, images, contracts, spreadsheets, and other files stored within the Salesforce platform. It acts as a comprehensive container that includes both the file's binary content—typically encoded in base64—and its associated metadata, such as file name, description, MIME type, and tags. This property enables seamless integration, retrieval, upload, update, and management of files within Salesforce-related workflows and processes. By linking relevant documents directly to Salesforce records, it enhances data organization, accessibility, collaboration, and compliance across various business functions. 
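\n\n**Illustrative sketch (hypothetical values)**\nA hedged example assembling the sub-fields documented in this section into a single `document` object; the name, folder ID, developer name, and flags are placeholders drawn from the examples given for each sub-field, and not every sub-field is required for every operation:\n```json\n{\n  \"document\": {\n    \"name\": \"Quarterly Sales Report Q1 2024\",\n    \"folderId\": \"00l1t000003XyzA\",\n    \"contentType\": \"application/pdf\",\n    \"developerName\": \"SalesReport_Q1_2024\",\n    \"isInternalUseOnly\": true,\n    \"isPublic\": false\n  }\n}\n```\n\n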
Additionally, it supports versioning, audit trails, and can trigger automation processes like workflows, validation rules, and triggers, ensuring robust document lifecycle management within the Salesforce ecosystem.\n\n**Field behavior**\n- Contains both the binary content (usually base64-encoded) and descriptive metadata of a file linked to a Salesforce record.\n- Supports a diverse array of file types including PDFs, images, reports, contracts, spreadsheets, and other common document formats.\n- Facilitates operations such as uploading new files, retrieving existing files, updating metadata, deleting documents, and managing versions within Salesforce.\n- May be required or optional depending on the specific API operation, object context, or organizational business rules.\n- Changes to the document content or metadata can trigger Salesforce workflows, triggers, validation rules, or automation processes.\n- Maintains versioning and audit trails when integrated with Salesforce Content or Files features.\n- Respects Salesforce security and sharing settings to control access and protect sensitive information.\n\n**Implementation guidance**\n- Encode the document content in base64 format when transmitting via API to ensure data integrity and compatibility.\n- Validate file size and type against Salesforce platform limits and organizational policies before upload to prevent errors.\n- Provide comprehensive metadata including file name, description, content type (MIME type), file extension, and relevant tags or categories to facilitate search, classification, and governance.\n- Manage user permissions and access controls in accordance with Salesforce security and sharing settings to safeguard sensitive data.\n- Use appropriate Salesforce API endpoints (e.g., /sobjects/Document, Attachment, ContentVersion) and HTTP methods aligned with the document type and intended operation.\n- Implement robust error handling to manage responses related to file size limits, unsupported formats, permission denials, or network issues.\n- Leverage Salesforce Content or Files features for enhanced document management capabilities such as version control, sharing,"},"attachment":{"type":"object","properties":{"id":{"type":"string","description":"Unique identifier for the attachment within the Salesforce system, serving as the immutable primary key for the attachment object. This ID is essential for accurately referencing, retrieving, updating, or deleting the specific attachment record through API calls. It must conform to Salesforce's standardized ID format, which can be either a 15-character case-sensitive or an 18-character case-insensitive alphanumeric string, with the 18-character version preferred for integrations due to its case insensitivity and reduced risk of errors. 
While the ID itself does not contain metadata about the attachment's content, it is crucial for linking the attachment to related Salesforce records and ensuring precise operations on the attachment resource across the platform.\n\n**Field behavior**\n- Acts as the unique and immutable primary key for the attachment record.\n- Required for all API operations involving the attachment, including retrieval, updates, and deletions.\n- Ensures consistent and precise identification of the attachment in all Salesforce API interactions.\n- Remains constant and unchangeable throughout the lifecycle of the attachment once created.\n\n**Implementation guidance**\n- Must strictly adhere to Salesforce ID formats: either 15-character case-sensitive or 18-character case-insensitive alphanumeric strings.\n- Should be dynamically obtained from Salesforce API responses when creating or querying attachments to ensure accuracy.\n- Avoid hardcoding or manually entering IDs to prevent data integrity issues and API errors.\n- Validate the ID format programmatically before use to ensure compatibility with Salesforce API requirements.\n- Prefer using the 18-character ID in integration scenarios to minimize case sensitivity issues.\n\n**Examples**\n- \"00P1t00000XyzAbCDE\"\n- \"00P1t00000XyzAbCDEAAA\"\n- \"00P3t00000LmNOpQR1\"\n\n**Important notes**\n- The 15-character ID is case-sensitive, whereas the 18-character ID is case-insensitive and recommended for integration scenarios.\n- Using an incorrect, malformed, or outdated ID will result in API errors or failed operations.\n- The ID does not convey any descriptive information about the attachment’s content or metadata.\n- Always retrieve and use the ID directly from Salesforce API responses to ensure validity and prevent errors.\n- The ID is unique within the Salesforce org and cannot be reused or duplicated.\n\n**Dependency chain**\n- Generated automatically by Salesforce upon creation of the attachment record.\n- Utilized by other API calls and properties that require precise attachment identification.\n- Linked to parent or related Salesforce objects, integrating the attachment"},"name":{"type":"string","description":"The name property represents the filename of the attachment within Salesforce and serves as the primary identifier for the attachment. It is crucial for enabling easy recognition, retrieval, and management of attachments both in the user interface and through API operations. This property typically includes the file extension, which specifies the file type and ensures proper handling, display, and processing of the file across different systems and platforms. The filename should be meaningful, descriptive, and concise to provide clarity and ease of use, especially when managing multiple attachments linked to a single parent record. 
Adhering to proper naming conventions helps maintain organization, improves searchability, and prevents conflicts or ambiguities that could arise from duplicate or unclear filenames.\n\n**Field behavior**\n- Stores the filename of the attachment as a string value.\n- Acts as the key identifier for the attachment within the Salesforce environment.\n- Displayed prominently in the Salesforce UI and API responses when viewing or managing attachments.\n- Should be unique within the context of the parent record to avoid confusion.\n- Includes file extensions (e.g., .pdf, .jpg, .docx) to denote the file type and facilitate correct processing.\n- Case sensitivity may apply depending on the context, affecting retrieval and display.\n- Used in URL generation and API endpoints, impacting accessibility and referencing.\n\n**Implementation guidance**\n- Validate filenames to exclude prohibited characters (such as \\ / : * ? \" < > |) and other special characters that could cause errors or security vulnerabilities.\n- Ensure the filename length does not exceed Salesforce’s maximum limit, typically 255 characters.\n- Always include the appropriate and accurate file extension to enable correct file type recognition.\n- Use clear, descriptive, and consistent naming conventions that reflect the content or purpose of the attachment.\n- Perform validation checks before upload to prevent failures or inconsistencies.\n- Avoid renaming attachments post-upload to maintain integrity of references and links.\n- Consider localization and encoding standards to support international characters where applicable.\n\n**Examples**\n- \"contract_agreement.pdf\"\n- \"profile_picture.jpg\"\n- \"sales_report_q1_2024.xlsx\"\n- \"meeting_notes.txt\"\n- \"project_plan_v2.docx\"\n\n**Important notes**\n- Filenames are case-sensitive in certain contexts, which can impact retrieval and display.\n- Avoid special characters that may interfere with URLs, file systems, or API processing.\n- Renaming an attachment after upload can disrupt existing references or links to the file.\n- Consistency between the filename and the actual content enhances user"},"parentId":{"type":"string","description":"The unique identifier of the parent Salesforce record to which the attachment is linked. This field establishes a direct and essential relationship between the attachment and its parent object, such as an Account, Contact, Opportunity, or any other Salesforce record type that supports attachments. It ensures that attachments are correctly associated within the Salesforce data model, enabling accurate retrieval, management, and organization of attachments in relation to their parent records. By linking attachments to their parent records, this field facilitates data integrity, efficient querying, and seamless navigation between related records. It is a mandatory field when creating or updating attachments and must reference a valid, existing Salesforce record ID. 
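\n\n**Illustrative sketch (hypothetical IDs)**\nA hedged example of where this field sits inside an `attachment` object alongside the sibling fields documented in this section; the parent ID, file name, and content type are placeholder values taken from the examples here:\n```json\n{\n  \"attachment\": {\n    \"name\": \"contract_agreement.pdf\",\n    \"parentId\": \"0011a000003XYZ123\",\n    \"contentType\": \"application/pdf\",\n    \"isPrivate\": true\n  }\n}\n```\n\n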
Proper use of this field guarantees referential integrity, enforces access controls based on parent record permissions, and impacts the visibility and lifecycle of the attachment within the Salesforce environment.\n\n**Field behavior**\n- Represents the Salesforce record ID that the attachment is associated with.\n- Mandatory when creating or updating an attachment to specify its parent record.\n- Must correspond to an existing Salesforce record ID of a valid object type that supports attachments.\n- Enables querying, filtering, and reporting of attachments based on their parent record.\n- Enforces referential integrity between attachments and parent records.\n- Changes to the parent record, including deletion or modification, can affect the visibility, accessibility, and lifecycle of the attachment.\n- Access to the attachment is governed by the permissions and sharing settings of the parent record.\n\n**Implementation guidance**\n- Ensure the parentId is a valid 15- or 18-character Salesforce record ID, with preference for 18-character IDs to avoid case sensitivity issues.\n- Verify the existence and active status of the parent record before assigning the parentId.\n- Confirm that the parent object type supports attachments to prevent errors during attachment operations.\n- Handle user permissions and sharing settings to ensure appropriate access rights to the parent record and its attachments.\n- Use Salesforce APIs or metadata services to validate the parentId format, object type, and existence.\n- Implement robust error handling for invalid, non-existent, or unauthorized parentId values during attachment creation or updates.\n- Consider the impact of parent record lifecycle events (such as deletion or archiving) on associated attachments.\n\n**Examples**\n- \"0011a000003XYZ123A\" (Account record ID)\n- \"0031a000004ABC456A\" (Contact record ID)\n- \"0061a000005DEF789A\" (Opportunity record ID)\n\n**Important notes**\n- The parentId is critical for maintaining data integrity and ensuring"},"contentType":{"type":"string","description":"The MIME type of the attachment content, specifying the exact format and nature of the file (e.g., \"image/png\", \"application/pdf\", \"text/plain\"). This information is critical for correctly interpreting, processing, and displaying the attachment across different systems and platforms. It enables clients and servers to determine how to handle the file, whether to render it inline, prompt for download, or apply specific processing rules. Accurate contentType values facilitate validation, security scanning, and compatibility checks, ensuring that the attachment is managed appropriately throughout its lifecycle. 
Properly defined contentType values improve interoperability, user experience, and system security by enabling precise content handling and reducing the risk of errors or vulnerabilities.\n\n**Field behavior**\n- Defines the media type of the attachment to guide processing, rendering, and handling decisions.\n- Used by clients, servers, and intermediaries to determine appropriate actions such as inline display, download prompts, or specialized processing.\n- Supports validation of file format during upload, storage, and download operations to ensure data integrity.\n- Influences security measures including filtering, scanning for malware, and enforcing content policies.\n- Affects user interface elements like preview generation, icon selection, and content categorization.\n- May be used in logging, auditing, and analytics to track content types handled by the system.\n\n**Implementation guidance**\n- Must strictly adhere to the standard MIME type format as defined by IANA (e.g., \"type/subtype\").\n- Should accurately reflect the actual content format to prevent misinterpretation or processing errors.\n- Use generic types such as \"application/octet-stream\" only when the specific type cannot be determined.\n- Include optional parameters (e.g., charset, boundary) when relevant to fully describe the content.\n- Ensure the contentType is set or updated whenever the attachment content changes to maintain consistency.\n- Validate the contentType against the file extension and actual content where feasible to enhance reliability.\n- Consider normalizing the contentType to lowercase to maintain consistency across systems.\n- Avoid relying solely on contentType for security decisions; combine with other metadata and content inspection.\n\n**Examples**\n- \"image/jpeg\"\n- \"application/pdf\"\n- \"text/plain; charset=utf-8\"\n- \"application/vnd.ms-excel\"\n- \"audio/mpeg\"\n- \"video/mp4\"\n- \"application/json\"\n- \"multipart/form-data; boundary=something\"\n- \"application/zip\"\n\n**Important notes**\n- An incorrect, missing, or misleading contentType can lead to improper"},"isPrivate":{"type":"boolean","description":"Indicates whether the attachment is designated as private, restricting its visibility exclusively to users who have been explicitly authorized to access it. This setting is critical for maintaining confidentiality and ensuring that sensitive or proprietary information contained within attachments is accessible only to appropriate personnel. When marked as private, the attachment’s visibility is constrained based on the user’s permissions and the sharing settings of the parent record, thereby reinforcing data security within the Salesforce environment. 
This field directly controls the scope of access to the attachment, dynamically influencing who can view or interact with it, and must be managed carefully to align with organizational privacy policies and compliance requirements.\n\n**Field behavior**\n- When set to true, the attachment is accessible only to users with explicit authorization, effectively hiding it from unauthorized users.\n- When set to false or omitted, the attachment inherits the visibility of its parent record and is accessible to all users who have access to that record.\n- Directly affects the attachment’s visibility scope and access control within Salesforce.\n- Changes to this field immediately impact user access permissions for the attachment.\n- Does not override broader organizational sharing settings but further restricts access.\n- The privacy setting applies in conjunction with existing record-level and object-level security controls.\n\n**Implementation guidance**\n- Use this field to enforce strict data privacy and protect sensitive attachments in compliance with organizational policies and regulatory requirements.\n- Ensure that user roles, profiles, and sharing rules are properly configured to respect this privacy setting.\n- Validate that the value is a boolean (true or false) to prevent data inconsistencies.\n- Evaluate the implications on existing sharing rules and record-level security before modifying this field.\n- Consider implementing audit logging to track changes to this privacy setting for security and compliance purposes.\n- Test changes in a non-production environment to understand the impact on user access before deploying to production.\n- Communicate changes to relevant stakeholders to avoid confusion or unintended access issues.\n\n**Examples**\n- `true` — The attachment is private and visible only to users explicitly authorized to access it.\n- `false` — The attachment is public within the context of the parent record and visible to all users who have access to that record.\n\n**Important notes**\n- Setting this field to true does not grant access to users who lack permission to view the parent record; it only further restricts access.\n- Modifying this setting can significantly affect user access and should be managed carefully to avoid unintended data exposure or access denial.\n- Some Salesforce editions or configurations may have limitations or specific behaviors regarding attachment privacy"},"description":{"type":"string","description":"A textual description providing additional context, detailed information, and clarifying the content, purpose, or relevance of the attachment within Salesforce. This field enhances user understanding and facilitates easier identification, organization, and management of attachments by serving as descriptive metadata that complements the attachment’s core data without containing the attachment content itself. It supports plain text with special characters, punctuation, and full Unicode, enabling expressive, internationalized, and accessible descriptions. 
Typically displayed in user interfaces alongside attachment details, this description also plays a crucial role in search, filtering, sorting, and reporting operations, improving overall data discoverability and usability.\n\n**Field behavior**\n- Optional and editable field used to describe the attachment’s nature or relevance.\n- Supports plain text input including special characters, punctuation, and Unicode.\n- Displayed prominently in user interfaces where attachment details appear.\n- Utilized in search, filtering, sorting, and reporting functionalities to enhance data retrieval.\n- Does not contain or affect the actual attachment content or file storage.\n\n**Implementation guidance**\n- Encourage concise yet informative descriptions to aid quick identification.\n- Validate and sanitize input to prevent injection attacks and ensure data integrity.\n- Support full Unicode to accommodate international characters and symbols.\n- Enforce a reasonable maximum length (commonly 255 characters) to maintain performance and UI consistency.\n- Avoid inclusion of sensitive or confidential information unless appropriate security controls are applied.\n- Implement trimming or sanitization routines to maintain clean and consistent data.\n\n**Examples**\n- \"Quarterly financial report for Q1 2024.\"\n- \"Signed contract agreement with client XYZ.\"\n- \"Product brochure for the new model launch.\"\n- \"Meeting notes and action items from April 15, 2024.\"\n- \"Technical specification document for project Alpha.\"\n\n**Important notes**\n- This field contains descriptive metadata only, not the attachment content.\n- Sensitive information should be excluded or protected according to security policies.\n- Changes to this field do not impact the attachment file or its storage.\n- Consistent and accurate descriptions improve attachment management and user experience.\n- Plays a key role in enhancing searchability and filtering within Salesforce.\n\n**Dependency chain**\n- Directly associated with the attachment entity in Salesforce.\n- Often used in conjunction with other metadata fields such as attachment name, type, owner, size, and creation date.\n- Supports comprehensive context when combined with related records and metadata.\n\n**Technical details**\n- Data type: string.\n- Maximum length: typically 255 characters, configurable"}},"description":"The attachment property represents a file or document linked to a Salesforce record, encompassing both the binary content of the file and its associated metadata. It serves as a versatile container for a wide variety of file types—including documents, images, PDFs, spreadsheets, and emails—allowing users to enrich Salesforce records such as emails, cases, opportunities, or any standard or custom objects with supplementary information or evidence. This property supports comprehensive lifecycle management, enabling operations such as uploading new files, retrieving existing attachments, updating file details, and deleting attachments as needed. By maintaining a direct association with the parent Salesforce record through identifiers like ParentId, it ensures accurate linkage and seamless integration within the Salesforce ecosystem. 
The attachment property facilitates enhanced collaboration, documentation, and data completeness by providing a centralized and manageable way to handle files related to Salesforce data.\n\n**Field behavior**\n- Stores the binary content or a reference to a file linked to a specific Salesforce record.\n- Supports multiple file formats including but not limited to documents, images, PDFs, spreadsheets, and email files.\n- Enables attaching additional context, documentation, or evidence to Salesforce objects.\n- Allows full lifecycle management: create, read, update, and delete operations on attachments.\n- Contains metadata such as file name, MIME content type, file size, and the ID of the parent Salesforce record.\n- Reflects changes immediately in the associated Salesforce record’s file attachments.\n- Maintains a direct association with the parent record through the ParentId field to ensure accurate linkage.\n- Supports integration with Salesforce Files (ContentVersion and ContentDocument) for advanced file management features.\n\n**Implementation guidance**\n- Encode file content using Base64 encoding when transmitting via Salesforce APIs to ensure data integrity.\n- Validate file size against Salesforce limits (commonly up to 25 MB per attachment) and confirm supported file types before upload.\n- Use the ParentId field to correctly associate the attachment with the intended Salesforce record.\n- Implement robust permission checks to ensure only authorized users can upload, view, or modify attachments.\n- Consider leveraging Salesforce Files (ContentVersion and ContentDocument objects) for enhanced file management capabilities, versioning, and sharing features, especially for new implementations.\n- Handle error responses gracefully, including those related to file size limits, unsupported formats, or permission issues.\n- Ensure metadata fields such as file name and content type are accurately populated to facilitate proper handling and display.\n- When updating attachments, confirm that the new content or metadata aligns with organizational policies and compliance requirements.\n- Optimize performance"},"contentVersion":{"type":"object","properties":{"contentDocumentId":{"type":"string","description":"contentDocumentId is the unique identifier for the parent Content Document associated with a specific Content Version in Salesforce. It serves as the fundamental reference that links all versions of the same document, enabling comprehensive version control and efficient document management within the Salesforce ecosystem. This ID is immutable once the Content Version is created, ensuring a stable and consistent connection across all iterations of the document. By maintaining this linkage, Salesforce facilitates seamless navigation between individual document versions and their overarching document entity, supporting streamlined retrieval, updates, collaboration, and organizational workflows. 
This field is automatically assigned by Salesforce during the creation of a Content Version and is critical for maintaining the integrity and traceability of document versions throughout their lifecycle.\n\n**Field behavior**\n- Represents the unique ID of the parent Content Document to which the Content Version belongs.\n- Acts as the primary link connecting multiple versions of the same document.\n- Immutable after the Content Version record is created, preserving data integrity.\n- Enables navigation from a specific Content Version to its parent Content Document.\n- Automatically assigned and managed by Salesforce during Content Version creation.\n- Essential for maintaining consistent version histories and document relationships.\n\n**Implementation guidance**\n- Use this ID to reference or retrieve the entire document across all its versions.\n- Verify that the ID corresponds to an existing Content Document before usage.\n- Utilize in SOQL queries and API calls to associate Content Versions with their parent document.\n- Avoid modifying this field directly; it is managed internally by Salesforce.\n- Employ this ID to support document lifecycle management, version tracking, and collaboration features.\n- Leverage this field to ensure accurate linkage in integrations and custom development.\n\n**Examples**\n- \"0691t00000XXXXXAAA\"\n- \"0692a00000YYYYYBBB\"\n- \"0693b00000ZZZZZCCC\"\n\n**Important notes**\n- Distinct from the Content Version ID, which identifies a specific version rather than the entire document.\n- Mandatory for any valid Content Version record to ensure proper document association.\n- Cannot be null or empty for Content Version records.\n- Critical for preserving document integrity, version tracking, and enabling collaborative document management.\n- Changes to this ID are not permitted post-creation to avoid breaking version linkages.\n- Plays a key role in Salesforce’s document sharing, access control, and audit capabilities.\n\n**Dependency chain**\n- Requires the existence of a valid Content Document record in Salesforce.\n- Directly related to Content Version records representing different iterations of the same document.\n- Integral"},"title":{"type":"string","description":"The title of the content version serves as the primary name or label assigned to this specific iteration of content within Salesforce. It acts as a key identifier that enables users to quickly recognize, differentiate, and manage various versions of content. The title typically summarizes the subject matter, purpose, or significant details about the content, making it easier to locate and understand at a glance. It should be concise yet sufficiently descriptive to effectively convey the essence and context of the content version. 
Well-crafted titles improve navigation, searchability, and version tracking within content management workflows, enhancing overall user experience and operational efficiency.\n\n**Field behavior**\n- Functions as the main identifier or label for the content version in user interfaces, listings, and reports.\n- Enables users to distinguish between different versions of the same content efficiently.\n- Reflects the content’s subject, purpose, key attributes, or version status.\n- Should be concise but informative to facilitate quick recognition and comprehension.\n- Commonly displayed in search results, filters, version histories, and content libraries.\n- Supports sorting and filtering operations to streamline content management.\n\n**Implementation guidance**\n- Use clear, descriptive titles that accurately represent the content version’s focus or changes.\n- Avoid special characters or formatting that could cause display, parsing, or processing issues.\n- Keep titles within a reasonable length (typically up to 255 characters) to ensure readability across various UI components and devices.\n- Update the title appropriately when creating new versions to reflect significant updates or revisions.\n- Consider including version numbers, dates, or status indicators to enhance clarity and version tracking.\n- Follow organizational naming conventions and standards to maintain consistency.\n\n**Examples**\n- \"Q2 Financial Report 2024\"\n- \"Product Launch Presentation v3\"\n- \"Employee Handbook - Updated March 2024\"\n- \"Marketing Strategy Overview\"\n- \"Website Redesign Proposal - Final Draft\"\n\n**Important notes**\n- Titles do not need to be globally unique but should be meaningful and contextually relevant.\n- Changing the title does not modify the underlying content data or its integrity.\n- Titles are frequently used in search, filtering, sorting, and reporting operations within Salesforce.\n- Consistent naming conventions and formatting improve content management efficiency and user experience.\n- Consider organizational standards or guidelines when defining title formats.\n- Titles should avoid ambiguity to prevent confusion between similar content versions.\n\n**Dependency chain**\n- Part of the ContentVersion object within Salesforce’s content management framework.\n- Often associated with other metadata fields such as description"},"pathOnClient":{"type":"string","description":"The pathOnClient property specifies the original file path of the content as it exists on the client machine before being uploaded to Salesforce. This path serves as a precise reference to the source location of the file on the user's local system, facilitating identification, auditing, and tracking of the file's origin. It captures either the full absolute path or a relative path depending on the client environment and upload context, accurately reflecting the exact location from which the file was sourced. 
This information is primarily used for informational, diagnostic, and user interface purposes within Salesforce and does not influence how the file is stored, accessed, or secured in the system.\n\n**Field behavior**\n- Represents the full absolute or relative file path on the client device where the file was stored prior to upload.\n- Automatically populated during file upload processes when the client environment provides this information.\n- Used mainly for reference, auditing, and user interface display to provide context about the file’s origin.\n- May be empty, null, or omitted if the file was not uploaded from a client device or if the path information is unavailable or restricted.\n- Does not influence file storage, retrieval, or access permissions within Salesforce.\n- Retains the original formatting and syntax as provided by the client system at upload time.\n\n**Implementation guidance**\n- Capture the file path exactly as provided by the client system at the time of upload, preserving the original format and case sensitivity.\n- Handle differences in file path syntax across operating systems (e.g., backslashes for Windows, forward slashes for Unix/Linux/macOS).\n- Treat this property strictly as metadata for informational purposes; avoid using it for security, access control, or file retrieval logic.\n- Sanitize or mask sensitive directory or user information when displaying or logging this path to protect user privacy and comply with data protection regulations.\n- Ensure compliance with relevant privacy laws and organizational policies when storing or exposing client file paths.\n- Consider truncating, normalizing, or encoding paths if necessary to conform to platform length limits, display constraints, or to prevent injection vulnerabilities.\n- Provide clear documentation to users and administrators about the non-security nature of this property to prevent misuse.\n\n**Examples**\n- \"C:\\\\Users\\\\JohnDoe\\\\Documents\\\\Report.pdf\"\n- \"/home/janedoe/projects/presentation.pptx\"\n- \"Documents/Invoices/Invoice123.pdf\"\n- \"Desktop/Project Files/DesignMockup.png\"\n- \"../relative/path/to/file.txt\"\n\n**Important notes**\n- This property does not"},"tagCsv":{"type":"string","description":"A comma-separated string representing the tags associated with the content version in Salesforce. This field enables the assignment of multiple descriptive tags within a single string, facilitating efficient categorization, organization, and enhanced searchability of the content. Tags serve as metadata labels that help users filter, locate, and manage content versions effectively across the platform by grouping related items and supporting advanced search and filtering capabilities. 
Proper use of this field improves content discoverability, streamlines content management workflows, and supports dynamic content classification.\n\n**Field behavior**\n- Accepts multiple tags separated by commas, typically without spaces, though trimming whitespace is recommended.\n- Tags are used to categorize, organize, and improve the discoverability of content versions.\n- Tags can be added, updated, or removed by modifying the entire string value.\n- Duplicate tags within the string should be avoided to maintain clarity and effectiveness.\n- Tags are generally treated as case-insensitive for search purposes but stored exactly as entered.\n- Changes to this field immediately affect how content versions are indexed and retrieved in search operations.\n\n**Implementation guidance**\n- Ensure individual tags do not contain commas to prevent parsing errors.\n- Validate the tagCsv string for invalid characters, excessive length, and proper formatting.\n- When updating tags, replace the entire string to accurately reflect the current tag set.\n- Trim leading and trailing whitespace from each tag before processing or storage.\n- Utilize this field to enhance search filters, categorization, and content management workflows.\n- Implement application-level checks to prevent duplicate or meaningless tags.\n- Consider standardizing tag formats (e.g., lowercase) to maintain consistency across records.\n\n**Examples**\n- \"marketing,Q2,presentation\"\n- \"urgent,clientA,confidential\"\n- \"releaseNotes,v1.2,approved\"\n- \"finance,yearEnd,reviewed\"\n\n**Important notes**\n- The maximum length of the tagCsv string is subject to Salesforce field size limitations.\n- Tags should be meaningful, consistent, and standardized to maximize their utility.\n- Modifications to tags can impact content visibility and access if used in filtering or permission logic.\n- This field does not enforce uniqueness of tags; application-level logic should handle duplicate prevention.\n- Tags are stored as-is; any normalization or case conversion must be handled externally.\n- Improper formatting or invalid characters may cause errors or unexpected behavior in search and filtering.\n\n**Dependency chain**\n- Closely related to contentVersion metadata and search/filtering functionality.\n- May integrate with Salesforce"},"contentLocation":{"type":"string","description":"contentLocation specifies the precise storage location of the content file associated with the Salesforce ContentVersion record. It identifies where the actual file data resides—whether within Salesforce's internal storage infrastructure, an external content management system, or a linked external repository—and governs how the content is accessed, retrieved, and managed throughout its lifecycle. This field plays a crucial role in content delivery, access permissions, and integration workflows by clearly indicating the source location of the file data. 
Properly setting and interpreting contentLocation ensures that content is handled correctly according to its storage context, enabling seamless access when stored internally or facilitating robust integration and synchronization with third-party external systems.\n\n**Field behavior**\n- Determines the origin and storage context of the content file linked to the ContentVersion record.\n- Influences the mechanisms and protocols used for accessing, retrieving, and managing the content.\n- Typically assigned automatically by Salesforce based on the upload method or integration configuration.\n- Common values include 'S' for Salesforce internal storage, 'E' for external storage systems, and 'L' for linked external locations.\n- Often read-only to preserve data integrity and prevent unauthorized or unintended modifications.\n- Alterations to this field can impact content visibility, sharing settings, and delivery workflows.\n\n**Implementation guidance**\n- Utilize this field to programmatically distinguish between internally stored content and externally managed files.\n- When integrating with external repositories, ensure contentLocation accurately reflects the external storage to enable correct content handling and retrieval.\n- Avoid manual updates unless supported by Salesforce documentation and accompanied by a comprehensive understanding of the consequences.\n- Validate the contentLocation value prior to processing or delivering content to guarantee proper routing and access control.\n- Confirm that external storage systems are correctly configured, authenticated, and authorized to maintain uninterrupted access.\n- Monitor this field closely during content migration or integration activities to prevent data inconsistencies or access disruptions.\n\n**Examples**\n- 'S' — Content physically stored within Salesforce’s internal storage infrastructure.\n- 'E' — Content residing in an external system integrated with Salesforce, such as a third-party content repository.\n- 'L' — Content located in a linked external location, representing specialized or less common storage configurations.\n\n**Important notes**\n- Manual modifications to contentLocation can lead to broken links, access failures, or data inconsistencies.\n- Salesforce generally manages this field automatically to uphold consistency and reliability.\n- The contentLocation value directly influences content delivery methods and access control policies.\n- External content locations require proper configuration, authentication, and permissions to function correctly"}},"description":"The contentVersion property serves as the unique identifier for a specific iteration of a content item within the Salesforce platform's content management system. It enables precise tracking, retrieval, and management of individual versions or revisions of documents, files, or digital assets. Each contentVersion corresponds to a distinct, immutable state of the content at a given point in time, facilitating robust version control, content integrity, and auditability. When a content item is created or modified, Salesforce automatically generates a new contentVersion to capture those changes, allowing users and systems to access, compare, or revert to previous versions as necessary. 
This property is fundamental for workflows involving content updates, approvals, compliance tracking, and historical content analysis.\n\n**Field behavior**\n- Uniquely identifies a single, immutable version of a content item within Salesforce.\n- Differentiates between multiple revisions or updates of the same content asset.\n- Automatically created or incremented by Salesforce upon content creation or modification.\n- Used in API operations to retrieve, reference, update metadata, or manage specific content versions.\n- Remains constant once assigned; any content changes result in a new contentVersion record.\n- Supports audit trails by preserving historical versions of content.\n\n**Implementation guidance**\n- Ensure synchronization of contentVersion values with Salesforce to maintain consistency across integrated systems.\n- Utilize this property for version-specific operations such as downloading content, updating metadata, or deleting particular versions.\n- Validate contentVersion identifiers against Salesforce API standards to avoid errors or mismatches.\n- Implement comprehensive error handling for scenarios where contentVersion records are missing, deprecated, or inaccessible due to permission restrictions.\n- Consider access control differences at the version level, as visibility and permissions may vary between versions.\n- Leverage contentVersion in workflows requiring version comparison, rollback, or approval processes.\n\n**Examples**\n- \"0681t000000XyzA\" (initial version of a document)\n- \"0681t000000XyzB\" (a subsequent, updated version reflecting content changes)\n- \"0681t000000XyzC\" (a further revised version incorporating additional edits)\n\n**Important notes**\n- contentVersion is distinct from contentDocumentId; contentVersion identifies a specific version, whereas contentDocumentId refers to the overall document entity.\n- Proper versioning is critical for maintaining content integrity, supporting audit trails, and ensuring regulatory compliance.\n- Access permissions and visibility can differ between content versions, potentially affecting user access and operations.\n- The contentVersion ID typically begins with the prefix"}}},"File":{"type":"object","description":"**CRITICAL: This object is REQUIRED for all file-based import adaptor types.**\n\n**When to include this object**\n\n✅ **MUST SET** when `adaptorType` is one of:\n- `S3Import`\n- `FTPImport`\n- `AS2Import`\n\n❌ **DO NOT SET** for non-file-based imports like:\n- `SalesforceImport`\n- `NetSuiteImport`\n- `HTTPImport`\n- `MongodbImport`\n- `RDBMSImport`\n\n**Minimum required fields**\n\nFor most file imports, you need at minimum:\n- `fileName`: The output file name (supports Handlebars like `{{timestamp}}`)\n- `type`: The file format (json, csv, xml, xlsx)\n- `skipAggregation`: Usually `false` for standard imports\n\n**Example (S3 Import)**\n\n```json\n{\n  \"file\": {\n    \"fileName\": \"customers-{{timestamp}}.json\",\n    \"skipAggregation\": false,\n    \"type\": \"json\"\n  },\n  \"s3\": {\n    \"region\": \"us-east-1\",\n    \"bucket\": \"my-bucket\",\n    \"fileKey\": \"customers-{{timestamp}}.json\"\n  },\n  \"adaptorType\": \"S3Import\"\n}\n```","properties":{"fileName":{"type":"string","description":"**REQUIRED for file-based imports.**\n\nThe name of the file to be created/written. 
Supports Handlebars expressions for dynamic naming.\n\n**Default behavior**\n- When the user does not specify a file naming convention, **always default to timestamped filenames** using `{{timestamp}}` (e.g., `\"items-{{timestamp}}.csv\"`). This ensures each run produces a unique file and avoids overwriting previous exports.\n- Only use a fixed filename (without timestamp) if the user explicitly requests overwriting or a fixed name.\n\n**Common patterns**\n- `\"data-{{timestamp}}.json\"` - Timestamped JSON file (DEFAULT — use this pattern when not specified)\n- `\"export-{{date}}.csv\"` - Date-stamped CSV file\n- `\"{{recordType}}-backup.xml\"` - Dynamic record type naming\n\n**Examples**\n- `\"customers-{{timestamp}}.json\"`\n- `\"orders-export.csv\"`\n- `\"inventory-{{date}}.xlsx\"`\n\n**Important**\n- For S3 imports, this should typically match the `s3.fileKey` value\n- For FTP imports, this should typically match the `ftp.fileName` value"},"skipAggregation":{"type":"boolean","description":"Controls whether records are aggregated into a single file or processed individually.\n\n**Values**\n- `false` (DEFAULT): Records are aggregated into a single output file\n- `true`: Each record is written as a separate file\n\n**When to use**\n- **`false`**: Standard batch imports where all records go into one file\n- **`true`**: When each record needs its own file (e.g., individual documents)\n\n**Default:** `false`","default":false},"type":{"type":"string","enum":["json","csv","xml","xlsx","filedefinition"],"description":"**REQUIRED for file-based imports.**\n\nThe format of the output file.\n\n**Options**\n- `\"json\"`: JSON format (most common for API data)\n- `\"csv\"`: Comma-separated values (tabular data)\n- `\"xml\"`: XML format\n- `\"xlsx\"`: Excel spreadsheet format\n- `\"filedefinition\"`: Custom file definition format\n\n**Selection guidance**\n- Use `\"json\"` for structured/nested data, API payloads\n- Use `\"csv\"` for flat tabular data, spreadsheet-like records\n- Use `\"xml\"` for XML-based integrations, SOAP services\n- Use `\"xlsx\"` for Excel-compatible outputs"},"encoding":{"type":"string","enum":["utf8","win1252","utf-16le","gb18030","macroman","iso88591","shiftjis"],"description":"Character encoding for the output file.\n\n**Default:** `\"utf8\"`\n\n**When to change**\n- `\"win1252\"`: Legacy Windows systems\n- `\"utf-16le\"`: Unicode with BOM requirements\n- `\"gb18030\"`: Chinese character sets\n- `\"shiftjis\"`: Japanese character sets","default":"utf8"},"delete":{"type":"boolean","description":"Whether to delete the source file after successful import.\n\n**Values**\n- `true`: Delete source file after processing\n- `false`: Keep source file\n\n**Default:** `false`","default":false},"compressionFormat":{"type":"string","enum":["gzip","zip"],"description":"Compression format for the output file.\n\n**Options**\n- `\"gzip\"`: GZIP compression (.gz)\n- `\"zip\"`: ZIP compression (.zip)\n\n**When to use**\n- Large files that benefit from compression\n- When target system expects compressed files"},"backupPath":{"type":"string","description":"Path where backup copies of files should be stored.\n\n**Examples**\n- `\"backup/\"` - Relative backup folder\n- `\"/archive/2024/\"` - Absolute backup path"},"purgeInternalBackup":{"type":"boolean","description":"Whether to purge internal backup copies after successful processing.\n\n**Default:** `false`","default":false},"batchSize":{"type":"integer","description":"Number of records to include per batch/file when processing large 
datasets.\n\n**When to use**\n- Large imports that need to be split into multiple files\n- When target system has file size limitations\n\n**Note**\n- Only applicable when `skipAggregation` is `true`"},"encrypt":{"type":"boolean","description":"Whether to encrypt the output file.\n\n**Values**\n- `true`: Encrypt the file (requires PGP configuration)\n- `false`: No encryption\n\n**Default:** `false`","default":false},"csv":{"type":"object","description":"CSV-specific configuration. Only used when `type` is `\"csv\"`.","properties":{"rowDelimiter":{"type":"string","description":"Character(s) used to separate rows. Default is newline.","default":"\n"},"columnDelimiter":{"type":"string","description":"Character(s) used to separate columns. Default is comma.","default":","},"includeHeader":{"type":"boolean","description":"Whether to include header row with column names.","default":true},"wrapWithQuotes":{"type":"boolean","description":"Whether to wrap field values in quotes.","default":false},"replaceTabWithSpace":{"type":"boolean","description":"Replace tab characters with spaces.","default":false},"replaceNewlineWithSpace":{"type":"boolean","description":"Replace newline characters with spaces within fields.","default":false},"truncateLastRowDelimiter":{"type":"boolean","description":"Remove trailing row delimiter from the file.","default":false}}},"json":{"type":"object","description":"JSON-specific configuration. Only used when `type` is `\"json\"`.","properties":{"resourcePath":{"type":"string","description":"JSONPath expression to locate records within the JSON structure."}}},"xml":{"type":"object","description":"XML-specific configuration. Only used when `type` is `\"xml\"`.","properties":{"resourcePath":{"type":"string","description":"XPath expression to locate records within the XML structure."}}},"fileDefinition":{"type":"object","description":"Configuration settings for parsing files using a predefined file definition. This object enables processing of complex, non-standard, or proprietary file formats that require specialized parsing logic beyond what the standard parsers (CSV, JSON, XML, etc.) can handle.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"filedefinition\". This approach is required for properly handling:\n- Legacy or proprietary file formats with complex structures\n- Fixed-width text files where field positions are defined by character positions\n- Electronic Data Interchange (EDI) documents (X12, EDIFACT, etc.)\n- Multi-record type files where different lines have different formats\n- Files requiring complex preprocessing or custom parsing logic\n\n**File definition characteristics**\n\n- **Custom Parsing Rules**: Applies predefined parsing logic to complex file formats\n- **Reusable Configurations**: References externally defined parsing rules that can be reused\n- **Complex Format Support**: Handles formats that standard parsers cannot process\n- **Specialized Processing**: Often used for industry-specific or legacy formats\n\n**Implementation strategy for ai agents**\n\n1. **Format Analysis**:\n    - Determine if the file format is standard (CSV, JSON, XML) or requires custom parsing\n    - Check if the format follows industry standards like EDI, SWIFT, or fixed-width\n    - Assess if there are multiple record types within the same file\n    - Identify if specialized logic is needed to interpret the file structure\n\n2. 
**File Definition Selection**:\n    - Verify that a suitable file definition has already been created in the system\n    - Check if existing file definitions match the format requirements\n    - Confirm the file definition ID from system administrators if needed\n    - Ensure the file definition is compatible with the import's needs\n","properties":{"_fileDefinitionId":{"type":"string","format":"objectId","description":"Reference to the file definition resource."}}}}},"FileSystem":{"type":"object","description":"Configuration for FileSystem imports","properties":{"directoryPath":{"type":"string","description":"Directory path to write files to (required)"}},"required":["directoryPath"]},"AiAgent":{"type":"object","description":"AI Agent configuration used by both AiAgentImport and GuardrailImport (ai_agent type).\n\nConfigures which AI provider and model to use, along with instructions, parameter\ntuning, output format, and available tools. Two providers are supported:\n\n- **openai**: OpenAI models (GPT-4, GPT-4o, GPT-4o-mini, GPT-5, etc.)\n- **gemini**: Google Gemini models via LiteLLM proxy\n\nA `_connectionId` is optional (BYOK). If not provided on the parent import,\nplatform-managed credentials are used.\n","properties":{"provider":{"type":"string","enum":["openai","gemini"],"default":"openai","description":"AI provider to use.\n\n- **openai**: Uses OpenAI Responses API. Configure via the `openai` object.\n- **gemini**: Uses Google Gemini via LiteLLM. Configure via `litellm` with\n  Gemini-specific overrides in `litellm._overrides.gemini`.\n"},"openai":{"type":"object","description":"OpenAI-specific configuration. Used when `provider` is \"openai\".\n","properties":{"instructions":{"type":"string","maxLength":51200,"description":"System prompt that defines the AI agent's behavior, goals, and constraints.\nMaximum 50 KB.\n"},"model":{"type":"string","description":"OpenAI model identifier.\n"},"reasoning":{"type":"object","description":"Controls depth of reasoning for complex tasks.\n","properties":{"effort":{"type":"string","enum":["minimal","low","medium","high"],"description":"How much reasoning effort the model should invest"},"summary":{"type":"string","enum":["concise","auto","detailed"],"description":"Level of detail in reasoning summaries"}}},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"Sampling temperature. Higher values (e.g. 1.5) produce more creative output,\nlower values (e.g. 0.2) produce more focused and deterministic output.\n"},"topP":{"type":"number","minimum":0.1,"maximum":1,"description":"Nucleus sampling parameter"},"maxOutputTokens":{"type":"number","minimum":100,"maximum":128000,"default":1000,"description":"Maximum number of tokens in the model's response"},"serviceTier":{"type":"string","enum":["auto","default","priority"],"default":"default","description":"OpenAI service tier. 
\"priority\" provides higher rate limits and\nlower latency at increased cost.\n"},"output":{"type":"object","description":"Output format configuration","properties":{"format":{"type":"object","description":"Controls the structure of the model's output.\n","properties":{"type":{"type":"string","enum":["text","json_schema","blob"],"default":"text","description":"Output format type.\n\n- **text**: Free-form text response\n- **json_schema**: Structured JSON conforming to a schema\n- **blob**: Binary data output\n"},"name":{"type":"string","description":"Name for the output format (used with json_schema)"},"strict":{"type":"boolean","default":false,"description":"Whether to enforce strict schema validation on output"},"jsonSchema":{"type":"object","description":"JSON Schema for structured output. Required when `format.type` is \"json_schema\".\n","properties":{"type":{"type":"string","enum":["object","array","string","number","integer","boolean"]},"properties":{"type":"object","additionalProperties":true},"required":{"type":"array","items":{"type":"string"}},"additionalproperties":{"type":"boolean"}}}}},"verbose":{"type":"string","enum":["low","medium","high"],"default":"medium","description":"Level of detail in the model's response"}}},"tools":{"type":"array","description":"Tools available to the AI agent during processing.\n","items":{"type":"object","properties":{"type":{"type":"string","enum":["web_search","mcp","image_generation","tool"],"description":"Type of tool.\n\n- **web_search**: Search the web for information\n- **mcp**: Connect to an MCP server for additional tools\n- **image_generation**: Generate images\n- **tool**: Reference a Celigo Tool resource\n"},"webSearch":{"type":"object","description":"Web search configuration (empty object to enable)"},"imageGeneration":{"type":"object","description":"Image generation configuration","properties":{"background":{"type":"string","enum":["transparent","opaque"]},"quality":{"type":"string","enum":["low","medium","high"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024"]},"outputFormat":{"type":"string","enum":["png","webp","jpeg"]}}},"mcp":{"type":"object","description":"MCP server tool configuration","properties":{"_mcpConnectionId":{"type":"string","format":"objectId","description":"Connection to the MCP server"},"allowedTools":{"type":"array","description":"Specific tools to allow from the MCP server (all if omitted)","items":{"type":"string"}}}},"tool":{"type":"object","description":"Reference to a Celigo Tool resource.\n","properties":{"_toolId":{"type":"string","format":"objectId","description":"Reference to the Tool resource"},"overrides":{"type":"object","description":"Per-agent overrides for the tool's internal resources"}}}}}}}},"litellm":{"type":"object","description":"LiteLLM proxy configuration. 
Used when `provider` is \"gemini\".\n\nLiteLLM provides a unified interface to multiple AI providers.\nGemini-specific settings are in `_overrides.gemini`.\n","properties":{"model":{"type":"string","description":"LiteLLM model identifier"},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"Sampling temperature"},"maxCompletionTokens":{"type":"number","minimum":100,"maximum":128000,"default":1000,"description":"Maximum number of tokens in the response"},"topP":{"type":"number","minimum":0.1,"maximum":1,"description":"Nucleus sampling parameter"},"seed":{"type":"number","description":"Random seed for reproducible outputs"},"responseFormat":{"type":"object","description":"Output format configuration","properties":{"type":{"type":"string","enum":["text","json_schema","blob"],"default":"text"},"name":{"type":"string"},"strict":{"type":"boolean","default":false},"jsonSchema":{"type":"object","properties":{"type":{"type":"string","enum":["object","array","string","number","integer","boolean"]},"properties":{"type":"object","additionalProperties":true},"required":{"type":"array","items":{"type":"string"}},"additionalProperties":{"type":"boolean"}}}}},"_overrides":{"type":"object","description":"Provider-specific overrides","properties":{"gemini":{"type":"object","description":"Gemini-specific configuration overrides.\n","properties":{"systemInstruction":{"type":"string","maxLength":51200,"description":"System instruction for Gemini models. Equivalent to OpenAI's `instructions`.\nMaximum 50 KB.\n"},"tools":{"type":"array","description":"Gemini-specific tools","items":{"type":"object","properties":{"type":{"type":"string","enum":["googleSearch","urlContext","fileSearch","mcp","tool"],"description":"Type of Gemini tool.\n\n- **googleSearch**: Google Search grounding\n- **urlContext**: URL content retrieval\n- **fileSearch**: Search uploaded files\n- **mcp**: Connect to an MCP server\n- **tool**: Reference a Celigo Tool resource\n"},"googleSearch":{"type":"object","description":"Google Search configuration (empty object to enable)"},"urlContext":{"type":"object","description":"URL context configuration (empty object to enable)"},"fileSearch":{"type":"object","properties":{"fileSearchStoreNames":{"type":"array","items":{"type":"string"}}}},"mcp":{"type":"object","properties":{"_mcpConnectionId":{"type":"string","format":"objectId"},"allowedTools":{"type":"array","items":{"type":"string"}}}},"tool":{"type":"object","properties":{"_toolId":{"type":"string","format":"objectId"},"overrides":{"type":"object"}}}}}},"responseModalities":{"type":"array","description":"Response output modalities","items":{"type":"string","enum":["text","image"]},"default":["text"]},"topK":{"type":"number","description":"Top-K sampling parameter for Gemini"},"thinkingConfig":{"type":"object","description":"Controls Gemini's extended thinking capabilities","properties":{"includeThoughts":{"type":"boolean","description":"Whether to include thinking steps in the response"},"thinkingBudget":{"type":"number","minimum":100,"maximum":4000,"description":"Maximum tokens allocated for thinking"},"thinkingLevel":{"type":"string","enum":["minimal","low","medium","high"]}}},"imageConfig":{"type":"object","description":"Gemini image generation configuration","properties":{"aspectRatio":{"type":"string","enum":["1:1","2:3","3:2","3:4","4:3","4:5","5:4","9:16","16:9","21:9"]},"imageSize":{"type":"string","enum":["1K","2K","4k"]}}},"mediaResolution":{"type":"string","enum":["low","medium","high"],"description":"Resolution for 
media inputs (images, video)"}}}}}}}}},"Guardrail":{"type":"object","description":"Configuration for GuardrailImport adaptor type.\n\nGuardrails are safety and compliance checks that can be applied to data\nflowing through integrations. Three types of guardrails are supported:\n\n- **ai_agent**: Uses an AI model to evaluate data against custom instructions\n- **pii**: Detects and optionally masks personally identifiable information\n- **moderation**: Checks content against moderation categories (e.g. `hate`, `violence`, `harassment`, `sexual`, `self_harm`, `illicit`). Use category names exactly as listed in `moderation.categories` — e.g. `hate` (not \"hate speech\"), `violence` (not \"violent content\").\n\nGuardrail imports do not require a `_connectionId` (unless using BYOK for `ai_agent` type).\n","properties":{"type":{"type":"string","enum":["ai_agent","pii","moderation"],"description":"The type of guardrail to apply.\n\n- **ai_agent**: Evaluate data using an AI model with custom instructions.\n  Requires the `aiAgent` sub-configuration.\n- **pii**: Detect personally identifiable information in data.\n  Requires the `pii` sub-configuration with at least one entity type.\n- **moderation**: Check content for harmful categories.\n  Requires the `moderation` sub-configuration with at least one category.\n"},"confidenceThreshold":{"type":"number","minimum":0,"maximum":1,"default":0.7,"description":"Confidence threshold for guardrail detection (0 to 1).\n\nOnly detections with confidence at or above this threshold will be flagged.\nLower values catch more potential issues but may increase false positives.\n"},"aiAgent":{"$ref":"#/components/schemas/AiAgent"},"pii":{"type":"object","description":"Configuration for PII (Personally Identifiable Information) detection.\n\nRequired when `guardrail.type` is \"pii\". At least one entity type must be specified.\n","properties":{"entities":{"type":"array","description":"PII entity types to detect in the data.\n\nAt least one entity must be specified when using PII guardrails.\n","items":{"type":"string","enum":["credit_card_number","card_security_code_cvv_cvc","cryptocurrency_wallet_address","date_and_time","email_address","iban_code","bic_swift_bank_identifier_code","ip_address","location","medical_license_number","national_registration_number","persons_name","phone_number","url","us_bank_account_number","us_drivers_license","us_itin","us_passport_number","us_social_security_number","uk_nhs_number","uk_national_insurance_number","spanish_nif","spanish_nie","italian_fiscal_code","italian_drivers_license","italian_vat_code","italian_passport","italian_identity_card","polish_pesel","finnish_personal_identity_code","singapore_nric_fin","singapore_uen","australian_abn","australian_acn","australian_tfn","australian_medicare","indian_pan","indian_aadhaar","indian_vehicle_registration","indian_voter_id","indian_passport","korean_resident_registration_number"]}},"mask":{"type":"boolean","default":false,"description":"Whether to mask detected PII values in the output.\n\nWhen true, detected PII is replaced with masked values.\nWhen false, PII is only flagged without modification.\n"}}},"moderation":{"type":"object","description":"Configuration for content moderation.\n\nRequired when `guardrail.type` is \"moderation\". 
At least one category must be specified.\n","properties":{"categories":{"type":"array","description":"Content moderation categories to check for.\n\nAt least one category must be specified when using moderation guardrails.\n","items":{"type":"string","enum":["sexual","sexual_minors","hate","hate_threatening","harassment","harassment_threatening","self_harm","self_harm_intent","self_harm_instructions","violence","violence_graphic","illicit","illicit_violent"]}}}}},"required":["type"]},"OneToMany":{"type":"boolean","description":"Controls whether the resource treats child records within parent records as the primary data units.\n\n**Important: this is not for specifying where records are in an api response**\n\nIf you need to tell an export where to find the array of records in the HTTP response\nbody (e.g. \"the records are at data.items\"), use `http.response.resourcePath` instead.\n`oneToMany` serves a completely different purpose — it operates on records that have\nalready been extracted from the response.\n\n**What oneToMany actually does**\n\nWhen set to true, this field fundamentally changes how record data is processed:\n- The system will \"unwrap\" nested child records from their parent containers\n- Each child record becomes a separate output record for downstream processing\n- The pathToMany field must be set to indicate where these child records are located\n- Parent record fields can still be accessed via a special \"parent\" context\n\nThis is typically used on **lookup exports** (isLookup: true) or **imports** where\nthe incoming records contain nested arrays that need to be fanned out.\n\nCommon scenarios for enabling this option:\n- Processing order line items individually from an order export\n- Handling invoice line items from an invoice export\n- Processing individual transaction lines from journal entries\n- Extracting address records from customer exports\n\nThis setting applies for the duration of the current flow step only and does not affect\nhow data is stored or structured in other flow steps.\n\nIf false (default), the resource processes each top-level record as a single unit.\n","default":false},"PathToMany":{"type":"string","description":"Specifies the JSON path to child records when oneToMany mode is enabled.\n\nThis field is only used when oneToMany is set to true. It defines the exact location\nof child records within the parent record structure using dot notation:\n\n- Simple path: \"items\" for a direct child array field\n- Nested path: \"lines.lineItems\" for a more deeply nested array\n- Multi-level: \"details.items.subitems\" for deeply nested structures\n\nThe system uses this path to:\n- Locate the array of child records within each parent record\n- Extract each array element as a separate record for processing\n- Make both the child record data and parent context available to downstream steps\n\nImportant considerations:\n- The path must point to an array field\n- For row-based data (i.e. where Celigo models this via an array or arrays of objects), this field is not required\n- If the path is invalid or doesn't exist, the resource will report success but process zero records\n- Maximum path depth: 10 levels\n\nThis field must contain a valid JSON path expression using dot notation.\n"},"Filter":{"type":"object","description":"Configuration for selectively processing records based on specified criteria. 
This object enables\nprecise control over which items are included or excluded from processing operations.\n\n**Filter behavior**\n\nWhen configured, the filter is applied before processing begins:\n- Items that match the filter criteria are processed\n- Items that don't match are completely skipped\n- No partial processing is performed\n\n**Implementation approaches**\n\nThere are two distinct filtering mechanisms available:\n\n**Rule-Based Filtering (`type: \"expression\"`)**\n- **Best For**: Common filtering patterns based on standard attributes\n- **Capabilities**: Filter by names, values, dates, numerical ranges, text patterns\n- **Advantages**: Declarative, no coding required, consistent performance\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear, static criteria for selection\n\n**Script-Based Filtering (`type: \"script\"`)**\n- **Best For**: Complex logic, dynamic criteria, or business rules\n- **Capabilities**: Full programmatic control, access to complete metadata\n- **Advantages**: Maximum flexibility, can implement any filtering logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Simple rules aren't sufficient or logic needs to be dynamic\n","properties":{"type":{"type":"string","description":"Determines which filtering mechanism to use. This choice affects which properties\nmust be configured and how filtering logic is implemented.\n\n**Available types**\n\n**Rule-Based Filtering (`\"expression\"`)**\n- **Required Config**: The `expression` object with rule definitions\n- **Behavior**: Evaluates declarative rules against item attributes\n- **Best For**: Common patterns like name matching, date ranges, value limits\n- **Advantages**: Simpler to configure, no custom code required\n\n**Script-Based Filtering (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to determine which items to process\n- **Best For**: Complex conditions, business logic, dynamic criteria\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard filtering needs (name, size, date), use `\"expression\"`\n2. For complex logic or conditions not covered by expressions, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based filtering. This object enables filtering\nitems based on common attributes without requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define filtering rules that\ncan match against item attributes like name, type, value, date, and other properties.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Rules can be combined with AND/OR logic\n- Each rule can check a specific attribute\n- Multiple conditions can be applied (ranges, pattern matching, exact matches)\n\n**Common filter patterns**\n\n1. **Pattern matching**: Using wildcards like `*` and `?`\n2. **Value range filtering**: Numbers between min and max values\n3. **Date range filtering**: Items created/modified within specific time ranges\n4. 
**Status checking**: Items with specific status values or properties\n\nFor AI agents: Rule-based filtering should be your first choice when the filtering criteria\ncan be expressed in terms of standard attributes. Only use script-based filtering when\nmore complex logic is required.\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"1\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"1\"\nfor current implementations.\n","enum":["1"]},"rules":{"type":"array","description":"Expression array defining filter conditions using prefix notation. The first element is the operator,\nfollowed by its operands which may themselves be nested expression arrays.\n\nThe rule expression follows this pattern:\n- First element: Operator name (string)\n- Remaining elements: Operands for that operator (values or nested expressions)\n\n**Expression structure**\n\nFilter expressions use a prefix notation where operators appear before their operands:\n```\n[operator, operand1, operand2, ...]\n```\n\n**Comparison Operators**\n- `\"equals\"`: Exact match (equals)\n- `\"notequals\"`: Not equal to value (not equals)\n- `\"greaterthan\"`: Value is greater than specified value (is greater than)\n- `\"greaterthanequals\"`: Value is greater than or equal to specified value (is greater than or equals)\n- `\"lessthan\"`: Value is less than specified value (is less than)\n- `\"lessthanequals\"`: Value is less than or equal to specified value (is less than or equals)\n- `\"startswith\"`: String starts with specified prefix (starts with)\n- `\"endswith\"`: String ends with specified suffix (ends with)\n- `\"contains\"`: String contains specified substring (contains)\n- `\"doesnotcontain\"`: String does not contain specified substring (does not contain)\n- `\"isempty\"`: Field is empty or null (is empty)\n- `\"isnotempty\"`: Field contains a value (is not empty)\n- `\"matches\"`: Matches specified pattern (matches)\n\n**Logical Operators**\n- `\"and\"`: All conditions must be true\n- `\"or\"`: At least one condition must be true\n- `\"not\"`: Negates the condition\n\n**Field Access and Type Conversion**\n- `\"extract\"`: Access a field from the item by name\n- `\"settings\"`: Access a custom setting from the flow, flow step, or integration configuration\n- `\"boolean\"`: Convert value to Boolean type\n- `\"epochtime\"`: Convert value to Epoch Time (Unix timestamp)\n- `\"number\"`: Convert value to Number type\n- `\"string\"`: Convert value to String type\n\n**Field Access Details**\n\n**Using `extract` to access record fields:**\n- Retrieves values from the current record being processed\n- Can access nested properties using dot notation (e.g., `\"customer.email\"`)\n- Returns the raw field value which may need type conversion\n\n**Using `settings` to access configuration values:**\n- Retrieves values from the integration's configuration settings\n- Supports different scopes with prefix notation:\n  - `flow.settingName`: Access flow-level settings\n  - `export.settingName`: Access export-level settings\n  - `import.settingName`: Access import-level settings\n  - `integration.settingName`: Access integration-level settings\n- Useful for dynamic filtering based on configuration\n\n**Field Transformations**\n- `\"lowercase\"`: Convert string to lowercase\n- `\"uppercase\"`: Convert string to uppercase\n- `\"ceiling\"`: Round number up to the nearest integer\n- `\"floor\"`: Round number down to 
the nearest integer\n- `\"abs\"`: Get absolute value of a number\n\nType conversion operators are often necessary when comparing extracted field values against literals or when the field type doesn't match the comparison operator's expected type. For example:\n\n```json\n[\n  \"equals\",\n  [\n    \"number\",  // Convert to number before comparison\n    [\n      \"extract\",\n      \"quantity\"\n    ]\n  ],\n  100\n]\n```\n\nExample with datetime conversion:\n```json\n[\n  \"greaterthan\",\n  [\n    \"epochtime\",  // Convert to Unix timestamp before comparison\n    [\n      \"extract\",\n      \"createdDate\"\n    ]\n  ],\n  1609459200000  // January 1, 2021 as Unix timestamp in milliseconds\n]\n```\n\nExample with transformations:\n```json\n[\n  \"and\",\n  [\n    \"matches\",\n    [\n      \"lowercase\",  // Convert to lowercase before matching\n      [\n        \"string\",\n        [\n          \"extract\",\n          \"categories\"\n        ]\n      ]\n    ],\n    \"netsuite\"\n  ],\n  [\n    \"notequals\",\n    [\n      \"string\",\n      [\n        \"extract\",\n        \"recurrence.pattern.type\"\n      ]\n    ],\n    \"\"\n  ]\n]\n```\n\nExample comparing a record field with a flow setting:\n```json\n[\n  \"equals\",\n  [\n    \"string\",\n    [\n      \"extract\",\n      \"trantype\"\n    ]\n  ],\n  [\n    \"string\",\n    [\n      \"settings\",\n      \"flow.trantype\"\n    ]\n  ]\n]\n```\n\n**Examples**\n\nExample 1: Status field is not equal to \"cancelled\"\n```json\n[\n  \"notequals\",\n  [\n    \"extract\",\n    \"status\"\n  ],\n  \"cancelled\"\n]\n```\n\nExample 2: Filename starts with \"HC\"\n```json\n[\n  \"startswith\",\n  [\n    \"extract\",\n    \"filename\"\n  ],\n  \"HC\"\n]\n```\n\nExample 3: Amount is greater than 100\n```json\n[\n  \"greaterthan\",\n  [\n    \"number\",\n    [\n      \"extract\",\n      \"amount\"\n    ]\n  ],\n  100\n]\n```\n\nExample 4: Order date is after January 1, 2023\n```json\n[\n  \"greaterthan\",\n  [\n    \"extract\",\n    \"orderDate\"\n  ],\n  \"2023-01-01T00:00:00Z\"\n]\n```\n\nExample 5: Category contains any of [\"Urgent\", \"High Priority\"]\n```json\n[\n  \"anyof\",\n  [\n    \"extract\",\n    \"category\"\n  ],\n  [\"Urgent\", \"High Priority\"]\n]\n```\n","items":{"oneOf":[{"title":"String","type":"string"},{"title":"Number","type":"number"},{"title":"Boolean","type":"boolean"},{"title":"Object","type":"object"},{"title":"Array","type":"array"}]}}}},"script":{"type":"object","description":"Configuration for programmable script-based filtering. This object enables complex, custom\nfiltering logic beyond what expression-based filtering can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to determine which items\nshould be processed.\n\n**Implementation approach**\n\nScript-based filtering works by:\n1. Executing the specified function from the referenced script\n2. Passing item data to the function\n3. 
Using the function's return value (true/false) to determine inclusion\n\n**Common use cases**\n\nScript filtering is ideal for:\n- Complex business logic that can't be expressed as simple rules\n- Dynamic filtering criteria that change based on external factors\n- Content-based filtering that requires deep inspection\n- Advanced pattern matching beyond simple wildcards\n- Multi-stage filtering with intermediate logic\n\nFor AI agents: Only use script-based filtering when expression-based filtering is insufficient.\nScript filtering requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to the Script resource that contains the filtering logic. This must be a valid\nObjectId of a Script resource that exists in the system.\n\nThe referenced script must contain the function specified in the `function` field\nand must be written to handle filtering specifically. The script receives\nitem data as its input and must return a boolean value indicating whether\nto process the item (true) or skip it (false).\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n"},"function":{"type":"string","description":"Name of the function within the script to execute for filtering decisions. This function\nmust exist in the script referenced by _scriptId.\n\n**Function requirements**\n\nThe specified function must:\n- Accept item data as its first parameter\n- Return a boolean value (true to process the item, false to skip it)\n- Handle errors gracefully\n- Execute efficiently (as it may run for many items)\n\n**Function signature**\n\n```javascript\nfunction filterItems(itemData) {\n  // itemData contains properties of the item being evaluated\n  // Custom logic here\n  return true; // or false to skip the item\n}\n```\n\nFor AI agents: Ensure the function name exactly matches a function defined in the\nreferenced script, as mismatches will cause the filter to fail.\n"}}}}},"Transform":{"type":"object","description":"Configuration for transforming data during processing operations. This object enables\nreshaping of records.\n\n**Transformation capabilities**\n\nCeligo's transformation engine offers powerful features for data manipulation:\n- Precise field mapping with JSONPath expressions\n- Support for any level of nested arrays\n- Formula-based field value generation\n- Dynamic references to flow and integration settings\n\n**Implementation approaches**\n\nThere are two distinct transformation mechanisms available:\n\n**Rule-Based Transformation (`type: \"expression\"`)**\n- **Best For**: Most transformation scenarios from simple to complex\n- **Capabilities**: Field mapping, formula calculations, lookups, nested data handling\n- **Advantages**: Visual configuration, no coding required, intuitive interface\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear mapping requirements or need to reshape data structure\n\n**Script-Based Transformation (`type: \"script\"`)**\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Capabilities**: Full programmatic control, custom processing, complex business rules\n- **Advantages**: Maximum flexibility, can implement any transformation logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Visual transformation tools aren't sufficient for your use case\n","properties":{"type":{"type":"string","description":"Determines which transformation mechanism to use. 
This choice affects which properties\nmust be configured and how transformation logic is implemented.\n\n**Available types**\n\n**Rule-Based Transformation (`\"expression\"`)**\n- **Required Config**: The `expression` object with mapping definitions\n- **Behavior**: Applies declarative rules to reshape data\n- **Best For**: Most transformation scenarios from simple to complex\n- **Advantages**: Visual configuration, no coding required\n\n**Script-Based Transformation (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to transform data\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard data transformations, use `\"expression\"`\n2. For complex logic or specialized processing, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based transformations. This object enables reshaping data\nwithout requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define transformation rules that\ncan map, modify, and generate data elements.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Field mappings define how input data is transformed to target fields\n- Formulas can be used to calculate or generate new values\n- Lookups can enrich data by fetching related information\n- Mode determines how records are processed (create new or modify existing)\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"2\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"2\"\nfor current implementations.\n","enum":["2"]},"rulesTwoDotZero":{"type":"object","description":"Configuration for version 2 transformation rules. 
This object contains the core logic\nfor how data is mapped, enriched, and transformed.\n\n**Capabilities**\n\nTransformation 2.0 provides:\n- Precise field mapping with JSONPath expressions\n- Support for deeply nested data structures\n- Formula-based field generation\n- Dynamic lookups for data enrichment\n- Multiple operating modes to fit different scenarios\n","properties":{"mode":{"type":"string","description":"Transformation mode that determines how records are handled during processing.\n\n**Available modes**\n\n**Create Mode (`\"create\"`)**\n- **Behavior**: Builds entirely new output records from inputs\n- **Use When**: Output structure differs significantly from input\n- **Advantage**: Clean slate approach, no field inheritance\n\n**Modify Mode (`\"modify\"`)**\n- **Behavior**: Makes targeted edits to existing records\n- **Use When**: Output structure should remain similar to input\n- **Advantage**: Preserves unmapped fields from the original record\n","enum":["create","modify"]},"mappings":{"$ref":"#/components/schemas/Mappings"},"lookups":{"allOf":[{"description":"Shared lookup tables used across all mappings defined in the transformation rules.\n\n**Purpose**\n\nLookups provide centralized value translation that can be referenced from any mapping\nin your transformation configuration. They enable consistent translation of codes, IDs,\nand values between systems without duplicating translation logic.\n\n**Usage in transformations**\n\nLookups are particularly valuable in transformations for:\n\n- **Data Normalization**: Standardizing values from diverse source systems\n- **Code Translation**: Converting between different coding systems (e.g., status codes)\n- **Field Enrichment**: Adding descriptive values based on ID or code lookups\n- **Cross-Reference Resolution**: Mapping identifiers between integrated systems\n\n**Implementation**\n\nLookups are defined once in this array and referenced by name in mappings:\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"statusMapping\",\n    \"map\": {\n      \"A\": \"Active\",\n      \"I\": \"Inactive\",\n      \"P\": \"Pending\"\n    },\n    \"default\": \"Unknown Status\"\n  }\n]\n```\n\nThen referenced in mappings using the lookupName property:\n\n```json\n{\n  \"generate\": \"status\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.statusCode\",\n  \"lookupName\": \"statusMapping\"\n}\n```\n\nThe system automatically applies the lookup during transformation processing.\n\nFor complete details on lookup properties and behavior, see the Lookups schema.\n"},{"$ref":"#/components/schemas/Lookups"}]}}}}},"script":{"type":"object","description":"Configuration for programmable script-based transformations. This object enables complex, custom\ntransformation logic beyond what expression-based transformations can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to transform data according to\nspecialized business rules or complex algorithms.\n\n**Implementation approach**\n\nScript-based transformation works by:\n1. Executing the specified function from the referenced script\n2. Passing input data to the function\n3. 
Using the function's return value as the transformed output\n\n**Common use cases**\n\nScript transformation is ideal for:\n- Complex business logic that can't be expressed through mappings\n- Algorithmic transformations requiring computation\n- Dynamic transformations based on external factors\n- Legacy system data format compatibility\n- Multi-stage processing with intermediate steps\n\nOnly use script-based transformation when expression-based transformation is insufficient.\nScript transformation requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the transformation logic.\n\nThe referenced script should contain the function specified in the\n'function' property.\n","format":"objectid"},"function":{"type":"string","description":"Name of the function within the script to execute for transformation. This function\nmust exist in the script referenced by _scriptId.\n"}}}}},"Mappings":{"type":"array","description":"Array of field mapping configurations for transforming data from one format into another.\n\n**Guidance**\n\nThis schema is designed around RECURSION as its core architectural principle. Understanding this recursive\nnature is essential for building effective mappings:\n\n1. The schema is self-referential by design - a mapping can contain nested mappings of the same structure\n2. Complex data structures (nested objects, arrays of objects, arrays of arrays of objects) are ALL\n   handled through this recursive pattern\n3. Each mapping handles one level of the data structure; deeper levels are handled by nested mappings\n\nWhen generating mappings programmatically:\n- For simple fields (string, number, boolean): Create single mapping objects\n- For objects: Create a parent mapping with nested 'mappings' array containing child field mappings\n- For arrays: Use 'buildArrayHelper' with extract paths defining array inputs and\n  recursive 'mappings' to define object structures\n\nThe system will process these nested structures recursively during runtime, ensuring proper construction\nof complex hierarchical data while maintaining excellent performance.\n","items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. json Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean, date)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}}},"items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. json Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean, date)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}},"Lookups":{"type":"array","description":"Configuration for value-to-value transformations using lookup tables.\n\n**Purpose**\n\nLookups provide a way to translate values from one system to another. They transform\ninput values into output values using either static mapping tables or\ndynamic lookup caches.\n\n**Lookup mechanisms**\n\nThere are two distinct lookup mechanisms available:\n\n1. **Static Lookups**: Define a simple key-value map object and store it as part of your resource\n   - Best for: Small, fixed sets of values that rarely change\n   - Implementation: Configure the `map` object with input-to-output value mappings\n   - Example: Country codes, status values, simple translations\n\n2. **Dynamic Lookups**: Reference an existing 'Lookup Cache' resource in your Celigo account\n   - Best for: Large datasets, frequently changing values, or complex reference data\n   - Implementation: Configure `_lookupCacheId` to reference cached data maintained independently\n   - Example: Product catalogs, customer databases, pricing information\n\n**Property usage**\n\nThere are two mutually exclusive ways to configure lookups, depending on which mechanism you choose:\n\n1. 
**For Static Mappings**: Configure the `map` property with a direct key-value object\n   ```json\n   \"map\": {\"US\": \"United States\", \"CA\": \"Canada\"}\n   ```\n\n2. **For Dynamic Lookups**: Configure the following properties:\n   - `_lookupCacheId`: Reference to the lookup cache resource\n   - `extract`: JSON path to extract specific value from the returned lookup object\n\n**When to use**\n\nLookups are ideal for:\n\n1. **Value Translation**: Mapping codes or IDs to human-readable values\n\n2. **Data Enrichment**: Adding related information to records during processing\n\n3. **Normalization**: Ensuring consistent formatting of values across systems\n\n**Implementation details**\n\nLookups can be referenced in:\n\n1. **Field Mappings**: Direct use in field transformation configurations\n\n2. **Handlebars Templates**: Use within templates with the syntax:\n   ```\n   {{lookup 'lookupName' record.fieldName}}\n   ```\n\n**Example usage**\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"countryCodeToName\",\n    \"map\": {\n      \"US\": \"United States\",\n      \"CA\": \"Canada\",\n      \"UK\": \"United Kingdom\"\n    },\n    \"default\": \"Unknown Country\",\n    \"allowFailures\": true\n  },\n  {\n    \"name\": \"productDetails\",\n    \"_lookupCacheId\": \"60a2c4e6f321d800129a1a3c\",\n    \"extract\": \"$.details.price\",\n    \"allowFailures\": false\n  }\n]\n```\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique identifier for the lookup table within this configuration.\n\nThis name must be unique within the scope where the lookup is defined and is used to reference\nthe lookup in handlebars templates with the syntax {{lookup 'name' value}}.\n\nChoose descriptive names that indicate the transformation purpose, such as:\n- \"countryCodeToName\" for country code to full name conversion\n- \"statusMapping\" for status code translations\n- \"departmentCodes\" for department code to name mapping\n"},"map":{"type":"object","description":"The lookup mapping table as key-value pairs.\n\nThis object contains the input values as keys and their corresponding\noutput values. When an input value matches a key in this object,\nit will be replaced with the corresponding value.\n\nThe map should be kept to a reasonable size (typically under 100 entries)\nfor optimal performance. 
For larger mapping requirements, consider using\ndynamic lookups instead.\n\nMaps can include:\n- Simple code to name conversions: {\"US\": \"United States\"}\n- Status transformations: {\"A\": \"Active\", \"I\": \"Inactive\"}\n- ID to name mappings: {\"100\": \"Marketing\", \"200\": \"Sales\"}\n\nValues can be strings, numbers, or booleans, but all are stored as strings\nin the configuration.\n"},"_lookupCacheId":{"type":"string","description":"Reference to a LookupCache resource that contains the reference data for the lookup.\n\n**Purpose**\n\nThis field connects the lookup to an external data source that has been cached in the system.\nUnlike static lookups that use the `map` property, dynamic lookups can reference large datasets\nor frequently changing information without requiring constant updates to the integration.\n\n**Implementation details**\n\nThe LookupCache resource referenced by this ID contains:\n- The data records to be used as a reference source\n- Configuration for how the data should be indexed and accessed\n- Caching parameters to balance performance with data freshness\n\n**Usage patterns**\n\nCommonly used to reference:\n- Product catalogs or SKU databases\n- Customer or account information\n- Pricing tables or discount rules\n- Complex business logic lookup tables\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n","format":"objectid"},"extract":{"type":"string","description":"JSON path expression that extracts a specific value from the cached lookup object.\n\n**Purpose**\n\nWhen using dynamic lookups with a LookupCache, this JSON path identifies which field to extract\nfrom the cached object after it has been retrieved using the lookup key.\n\n**Implementation details**\n\n- Must use JSON path syntax (similar to mapping extract fields)\n- Operates on the cached object returned by the lookup operation\n- Examples:\n  - \"$.name\" - Extract the name field from the top level\n  - \"$.details.price\" - Extract a nested price field\n  - \"$.attributes[0].value\" - Extract a value from the first element of an array\n\n**Usage scenario**\n\nWhen a lookup cache contains complex objects:\n```json\n// Cache entry for key \"PROD-123\":\n{\n  \"id\": \"PROD-123\",\n  \"name\": \"Premium Widget\",\n  \"details\": {\n    \"price\": 99.99,\n    \"currency\": \"USD\",\n    \"inStock\": true\n  }\n}\n```\n\nSetting extract to \"$.details.price\" would return 99.99 as the lookup result.\n\nIf no extract is provided, the entire cached object is returned as the lookup result.\n"},"default":{"type":"string","description":"Default value to use when the source value is not found in the lookup map.\n\nThis value is used as a fallback when:\n1. The source value doesn't match any key in the map\n2. 
allowFailures is set to true\n\nSetting an appropriate default helps prevent flow failures due to unexpected\nvalues and provides predictable behavior for edge cases.\n\nCommon default patterns include:\n- Descriptive unknowns: \"Unknown Country\", \"Unspecified Status\"\n- Original value indicators: \"{Original Value}\", \"No mapping found\"\n- Neutral values: \"Other\", \"N/A\", \"Miscellaneous\"\n\nIf allowFailures is false and no default is specified, the flow will fail\nwhen encountering unmapped values.\n"},"allowFailures":{"type":"boolean","description":"When true, missing lookup values will use the default value rather than causing an error.\n\n**Behavior control**\n\nThis field determines how the system handles source values that don't exist in the map:\n\n- true: Use the default value for missing mappings and continue processing\n- false: Treat missing mappings as errors, failing the record\n\n**Recommendation**\n\nSet this to true when:\n- New source values might appear over time\n- Data quality issues could introduce unexpected values\n- Processing should continue even with imperfect mapping\n\nSet this to false when:\n- Complete data accuracy is critical\n- All possible source values are known and controlled\n- Missing mappings indicate serious data problems that should be addressed\n\nThe best practice is typically to set allowFailures to true with a meaningful\ndefault value, so flows remain operational while alerting you to missing mappings.\n"}}}},"Form":{"type":"object","description":"Configuration for creating user-friendly settings forms that make it easier for less technical users\nto configure integration resources.\n\n**Settings form builder**\n\nThe Settings Form Builder allows you to create or edit user-friendly fields that prompt for text entry\nor selections that will be returned as settings applied to this resource. Your forms can include any\nfield types that you see elsewhere in integrator.io, such as:\n\n- Text fields\n- Dropdown selections\n- Checkboxes\n- Radio buttons\n- Date pickers\n- Multi-select fields\n- Search fields\n\nForm fields make it much easier for less technical users to work with your integration settings by:\n\n- Providing clear labels and help text\n- Enforcing validation rules\n- Offering pre-defined selection options\n- Grouping related settings logically\n- Supporting conditional visibility\n- Creating a consistent user experience\n","properties":{"form":{"type":"object","description":"Configuration that defines the structure, fields, and behavior of the settings form.\n\nThis object contains the complete definition of the form's layout, fields, validation rules,\nand interactive behaviors. 
The specific structure depends on the form complexity and can include\nfield definitions, sections, conditional display logic, and default values.\n\nThe form configuration is typically created and managed through the visual Form Builder interface\nrather than edited directly as JSON.\n","properties":{"fieldMap":{"type":"object","description":"A mapping of field identifiers to their configuration objects.\nEach key in this object represents a unique field ID, and the value contains\nall the configuration settings for that specific form field.\n","additionalProperties":{"type":"object","description":"Configuration for an individual form field.\n","properties":{"id":{"type":"string","description":"Unique identifier for this field within the form.\nThis value typically matches the key in the fieldMap object.\n"},"name":{"type":"string","description":"Name of the field, used as the property name when generating the settings object\nfrom the submitted form data.\n"},"type":{"type":"string","description":"The type of form control to render for this field.\n","enum":["text","checkbox","radiogroup","relativeuri","editor","keyvalue","select","multiselect","toggle","datetime","date"]},"label":{"type":"string","description":"Display label shown next to the field in the form.\n"},"description":{"type":"string","description":"Detailed explanation text that appears below the field, providing more context\nthan the label or helpText.\n"},"helpText":{"type":"string","description":"Explanatory text that appears when hovering over the help icon next to the field.\nUsed to provide additional guidance on how to use the field.\n"},"required":{"type":"boolean","description":"When true, the field must have a value before the form can be submitted.\n","default":false},"multiline":{"type":"boolean","description":"For text fields, determines whether the input should be a multi-line text area\ninstead of a single-line input.\n","default":false},"rowsMax":{"type":"integer","description":"For multiline text fields, specifies the maximum number of visible rows.\n"},"inputType":{"type":"string","description":"For text fields, specifies the HTML input type attribute to apply additional\nvalidation or specialized input behavior.\n","enum":["text","number","email","password","tel","url"]},"delimiter":{"type":"string","description":"For text fields, specifies a character to use for splitting the input into an array.\nUsed for collecting multiple values in a single text field.\n"},"mode":{"type":"string","description":"For editor fields, specifies the type of content being edited for syntax highlighting.\n","enum":["json","xml","csv","text"]},"keyName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the key input.\n"},"valueName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the value input.\n"},"showDelete":{"type":"boolean","description":"For keyvalue fields, determines whether to show a delete button for each key-value pair.\n"},"doNotAllowFutureDates":{"type":"boolean","description":"For date and datetime fields, restricts selection to dates not in the future.\n"},"skipTimezoneConversion":{"type":"boolean","description":"For datetime fields, prevents automatic timezone conversion of the date value.\n"},"options":{"type":"array","description":"For fields that present choices (select, multiselect, radiogroup, toggle), defines\nthe available options.\n","items":{"oneOf":[{"title":"Option 
group","type":"object","properties":{"items":{"type":"array","items":{"oneOf":[{"title":"String value","type":"string"},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]},"description":"Array of option values/labels to display in the selection control.\n"}}},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]}},"visibleWhen":{"type":"array","description":"Conditional display rules that determine when this field should be visible.\nIf empty or not provided, the field is always visible.\n","items":{"type":"object","properties":{"field":{"type":"string","description":"The ID of another field whose value controls the visibility of this field.\n"},"is":{"type":"array","items":{"type":"string"},"description":"Array of values - if the referenced field has any of these values,\nthis field will be visible.\n"}}}}}}},"layout":{"type":"object","description":"Defines how the form fields are arranged and grouped in the UI.\nThe layout can organize fields into columns, sections, or other visual groupings.\n","properties":{"type":{"type":"string","description":"The type of layout to use for the form.\n","enum":["column"]},"containers":{"type":"array","description":"Array of container objects that group fields or contain nested containers.\nEach container can represent a column, box, indented section, or collapsible section.\n","items":{"type":"object","properties":{"type":{"type":"string","description":"The visual style of the container.\n","enum":["indent","box","collapse"]},"label":{"type":"string","description":"The heading text displayed for this container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this container.\nEach ID must correspond to a key in the fieldMap object.\n"},"containers":{"type":"array","description":"Nested containers within this container. 
Allows for hierarchical organization\nof fields with different visual styles.\n","items":{"type":"object","properties":{"label":{"type":"string","description":"The heading text displayed for this nested container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this nested container.\n"}}}}}}}}}},"additionalProperties":true},"init":{"type":"object","description":"Configuration for custom JavaScript initialization that executes when the form is first loaded.\n\nThis object defines a JavaScript hook that prepares the form for use, sets initial field values,\nperforms validation, or otherwise customizes the form behavior before it is displayed to the user.\n\n**Function signature**\n\nThe initialization function is invoked with a single 'options' argument containing contextual information:\n```javascript\nfunction formInit(options) {\n  // Process options and return the form object\n  return options.resource.settingsForm.form;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.resource` - The current resource being configured\n- `options.parentResource` - The parent of the current resource\n- `options.grandparentResource` - The grandparent of the current resource\n- `options.license` - For integration apps, the license provisioned to the integration\n- `options.parentLicense` - For integration apps, the parent of the license\n\n\n**Common uses**\n\n- Dynamically generate field options based on resource configuration\n- Pre-populate default values from related resources\n- Apply conditional logic that depends on resource properties\n- Add, remove, or modify form fields based on user permissions or account settings\n- Fetch external data to populate selection options\n- Implement complex validation rules that depend on resource context\n- Create branching form experiences based on user selections\n\n**Return value**\n\nThe function must return a valid form object that the UI can render.\nThrowing an exception will signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called when the form\nis initialized and should handle any custom setup logic.\n\nThe function must follow the expected signature and return a valid form object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the initialization function.\n\nThe referenced script should contain the function specified in the\n'function' property. This script must be accessible within the user's account\nand have appropriate permissions.\n"}}}}},"PreSave":{"type":"object","description":"Defines a JavaScript hook that executes before the resource is saved.\n\nThis hook allows for programmatic validation, transformation, or enrichment of the\nresource itself before it is persisted. 
It can be used to enforce business rules,\nset derived properties, or implement cross-field validations that can't be expressed\nthrough the standard UI.\n\n**Function signature**\n\nThe preSave function is invoked with a single 'options' argument containing:\n```javascript\nfunction preSave(options) {\n  // Process options and return the modified resource\n  return options.newResource;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.newResource` - The resource being saved (with pending changes)\n- `options.oldResource` - The previous version of the resource (before changes)\n\n\n**Common uses**\n\n- Enforcing complex business rules across multiple fields\n- Automatically deriving field values based on other configuration\n- Performing validation that depends on external systems or data\n- Normalizing or standardizing configuration values\n- Adding computed or derived properties\n- Implementing versioning or change tracking\n- Dynamically looking up data using the Celigo API module to enrich configuration\n\n**Return value**\n\nThe function must return the newResource object (potentially modified) to be saved.\nThrowing an exception will prevent saving and signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called just before\nthe resource is saved.\n\nThe function must follow the expected signature and return the resource object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the preSave function.\n\nThe referenced script should contain the function specified in the\n'function' property. 
This script must be accessible within the user's account\nand have appropriate permissions.\n"}}},"Settings":{"type":"object","description":"Configuration settings that can be accessed by hooks, filters, mappings and handlebars templates at runtime.\n\nIt enables customization of the resource's logic, allowing hooks, mappings, filters, and\nhandlebars to access and apply the settings at runtime.\n\n**Usage**\n\nThe settings object can store arbitrary JSON data that you want to save with the resource.\nWhile it's often populated through a form defined in the `settingsForm` field, you can also:\n\n- Directly provide JSON settings without using a form\n- Store configuration values used by hooks and templates\n- Create resource-specific constants and parameters\n- Maintain lookup tables or mapping structures\n- Define conditional logic parameters\n\n**Accessibility**\n\nSettings are available in:\n- All handlebars fields for building dynamic payloads\n- Field mapping expressions\n- JavaScript hooks via the options object\n- Filters and transformations\n\n**Best practices**\n\nFor non-technical users, create a custom form instead of editing the JSON directly.\nThis provides a user-friendly interface for updating settings without requiring JSON knowledge.\n","additionalProperties":true},"Response":{"type":"object","description":"Complete export object as returned by the API","allOf":[{"$ref":"#/components/schemas/Request"},{"$ref":"#/components/schemas/ResourceResponse"},{"$ref":"#/components/schemas/IAResourceResponse"},{"type":"object","properties":{"aiDescription":{"$ref":"#/components/schemas/AIDescription"},"apim":{"$ref":"#/components/schemas/APIM"},"apiIdentifier":{"type":"string","readOnly":true,"description":"API identifier assigned to this import."},"_sourceId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the source resource this import was created from."},"_templateId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the template used to create this import."},"draft":{"type":"boolean","readOnly":true,"description":"Indicates whether this import is in draft state."},"draftExpiresAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the draft version of this import expires."},"debugUntil":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp until which debug logging is enabled for this import."}}}]},"ResourceResponse":{"type":"object","description":"Core response fields shared by all Celigo resources","properties":{"_id":{"type":"string","format":"objectId","readOnly":true,"description":"Unique identifier for the resource.\n\nThe _id is used in:\n- API endpoints that operate on a specific resource (e.g., GET, PUT, DELETE)\n- References from other resources (e.g., flows that use this resource)\n- Job history and error tracking\n\nFormat: 24-character hexadecimal string\n"},"createdAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was initially created.\n\nThis read-only field is automatically set during resource creation and cannot\nbe modified. 
It provides an audit trail for when the resource was first added\nto the system, which can be useful for:\n\n- Resource lifecycle management\n- Audit and compliance reporting\n- Troubleshooting integration timelines\n- Identifying older resources that may need review\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"},"lastModified":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was most recently updated.\n\nThis read-only field is automatically updated whenever any property of the\nresource is modified. It provides an audit trail that can be used for:\n\n- Determining if a resource has changed since it was last reviewed\n- Monitoring configuration changes during troubleshooting\n- Implementing cache invalidation strategies\n- Synchronizing related resources based on modification time\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix)\nand will always be equal to or later than the createdAt timestamp.\n"},"deletedAt":{"type":["string","null"],"format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was marked for deletion.\n\nWhen this field is present and contains a valid timestamp, it indicates\nthat the resource has been soft-deleted (moved to the recycle bin) but not\nyet permanently removed from the system. This allows for recovery of\naccidentally deleted resources within a specified retention period.\n\nThe deletedAt timestamp enables:\n- Filtering deleted resources from active resource listings\n- Implementing time-based retention policies for permanent deletion\n- Tracking deletion events for audit and compliance purposes\n- Resource recovery workflows with clear timeframes\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\nWhen null or absent, the resource is considered active.\n"}},"required":["_id"]},"IAResourceResponse":{"type":"object","description":"Integration app response fields for resources that are part of integration apps","properties":{"_integrationId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the specific integration instance that contains this resource.\n\nThis field is only populated for resources that are part of an integration app\ninstallation. 
It contains the unique identifier (_id) of the integration\nresource that was installed in the account.\n\nThe integration instance represents a specific installed instance of an\nintegration app, with its own configuration, settings, and runtime environment.\n\nThis reference enables:\n- Tracing the resource back to its parent integration instance\n- Permission and access control based on integration ownership\n- Lifecycle management (enabling/disabling, updating, or uninstalling)\n"},"_connectorId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the integration app that defines this resource.\n\nThis field is only populated for resources that are part of an integration app.\nIt contains the unique identifier (_id) of the integration app (connector)\nthat defines the structure, behavior, and templates for this resource.\n\nThe integration app is the published template that can be installed\nmultiple times across different accounts, with each installation creating\na separate integration instance (referenced by _integrationId).\n\nThis reference enables:\n- Identifying the source integration app for this resource\n- Determining which template version is being used\n- Linking to documentation, support, and marketplace information\n"}}},"AIDescription":{"type":"object","description":"AI-generated descriptions and documentation for the resource.\n\nThis object contains automatically generated content that helps users\nunderstand the purpose, behavior, and configuration of the resource without\nrequiring them to analyze the technical details. The AI-generated content\nis sanitized and safe for display in the UI.\n","properties":{"summary":{"type":"string","description":"Brief AI-generated summary of the resource's purpose and functionality.\n\nThis concise description provides a quick overview of what the resource does,\nwhat systems it interacts with, and its primary role in the integration.\nThe summary is suitable for display in list views, dashboards, and other\ncontexts where space is limited.\n\nMaximum length: 10KB\n"},"detailed":{"type":"string","description":"Comprehensive AI-generated description of the resource's functionality.\n\nThis detailed explanation covers the resource's purpose, configuration details,\ndata flow patterns, filtering logic, and other technical aspects. It provides\nin-depth information suitable for documentation, tooltips, or detailed views\nin the administration interface.\n\nThe content may include HTML formatting for improved readability.\n\nMaximum length: 10KB\n"},"generatedOn":{"type":"string","format":"date-time","description":"Timestamp indicating when the AI description was generated.\n\nThis field helps track the freshness of the AI-generated content and\ndetermine when it might need to be regenerated due to changes in the\nresource's configuration or behavior.\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"}}},"APIM":{"type":"array","description":"Read-only field that stores information about the integration resources\npublished in the API Management (APIM) platform.\n\nThis field tracks the relationship between integrator.io resources and their\npublished counterparts in the Gravitee API Management platform, which is\ntightly integrated with the Celigo UI. 
When resources are \"pushed\" to Gravitee,\nthis field is populated with the relevant identifiers and statuses.\n","items":{"type":"object","properties":{"apiId":{"type":"string","description":"Identifier for the API where this integrator.io resource is published in the APIM.\n\nThis is a Gravitee resource identifier (not prefixed with underscore like Celigo IDs)\nthat uniquely identifies the API in the API Management platform.\n"},"flowId":{"type":"string","description":"Identifier for the flow within the API where this integrator.io resource is linked.\n\nWhen an API has multiple integrator.io resources linked, each resource is associated\nwith a specific flow in the API, identified by this field. This is a Gravitee\nresource identifier.\n"},"status":{"type":"string","description":"Indicates the publishing stage of the integrator.io resource in APIM.\n\nPossible values:\n- 'oaspending': The resource is published but the OpenAPI Specification (OAS) is not\n  yet published. The apiId will be updated with the API ID created in APIM.\n- 'published': The OpenAPI Specification for the integrator.io resource has been\n  successfully uploaded to APIM.\n","enum":["oaspending","published"]}}}},"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. `500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}},"responses":{"400-bad-request":{"description":"Bad request. The server could not understand the request because of malformed syntax or invalid parameters.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}},"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. 
Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"422-unprocessable-entity":{"description":"Unprocessable entity. The request was well-formed but was unable to be followed due to semantic errors.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}}},"paths":{"/v1/imports":{"post":{"summary":"Create an import","description":"Creates a new import configuration that can be used to send data to applications\nor external destinations.\n","operationId":"createImport","tags":["Imports"],"requestBody":{"required":true,"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Request"}}}},"responses":{"201":{"description":"Import created successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Response"}}}},"400":{"$ref":"#/components/responses/400-bad-request"},"401":{"$ref":"#/components/responses/401-unauthorized"},"422":{"$ref":"#/components/responses/422-unprocessable-entity"}}}}}}
````
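
To make the composite-object mechanism described in the mappings schema above concrete, the sketch below reuses the `customer` / `orders` / `items` input from that schema's example and walks it through a single `objectarray` mapping. The output shown is what the documented rules imply, so treat it as illustrative; field names such as `orderLines` and `orderId` are arbitrary placeholders.

````json
// Input record (same shape as the example in the mappings schema)
{
  "customer": {
    "name": "John Doe",
    "orders": [
      {"id": "O-001", "items": [{"sku": "ABC", "qty": 2}, {"sku": "XYZ", "qty": 1}]},
      {"id": "O-002", "items": [{"sku": "DEF", "qty": 3}]}
    ]
  }
}

// Mapping entry a caller might configure
{
  "generate": "orderLines",
  "dataType": "objectarray",
  "buildArrayHelper": [
    {
      "extract": "$.customer.orders[*].items[*]",
      "mappings": [
        {"generate": "sku", "dataType": "string", "extract": "$.customer.orders.items.sku"},
        {"generate": "quantity", "dataType": "number", "extract": "$.customer.orders.items.qty"},
        {"generate": "orderId", "dataType": "string", "extract": "$.customer.orders.id"},
        {"generate": "customerName", "dataType": "string", "extract": "$.customer.name"}
      ]
    }
  ]
}

// Result implied by the composite-object rules: one element per matched item,
// with parent and root context still accessible
{
  "orderLines": [
    {"sku": "ABC", "quantity": 2, "orderId": "O-001", "customerName": "John Doe"},
    {"sku": "XYZ", "quantity": 1, "orderId": "O-001", "customerName": "John Doe"},
    {"sku": "DEF", "quantity": 3, "orderId": "O-002", "customerName": "John Doe"}
  ]
}
````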
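
The lookups schema above already defines a static `countryCodeToName` map; the fragment below sketches how that lookup might be referenced from a handlebars template at runtime. The `http.body` placement and the `countryCode` source field are assumptions for illustration; the exact template field depends on the adaptor type.

````json
// Lookup defined on the import (reused from the lookups example above)
"lookups": [
  {
    "name": "countryCodeToName",
    "map": {"US": "United States", "CA": "Canada"},
    "default": "Unknown Country",
    "allowFailures": true
  }
],

// Hypothetical handlebars-templated request body that resolves the lookup at
// runtime; "countryCode" is an illustrative source field name
"http": {
  "body": "{\"country\": \"{{lookup 'countryCodeToName' record.countryCode}}\"}"
}
````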
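
The settings form builder description has no JSON example, so here is a small, hypothetical `settingsForm` sketch using the `fieldMap` and `layout` structures defined above: a select field plus a numeric text field that is only visible for one selection. All field IDs, labels, and option values are made-up placeholders.

````json
"settingsForm": {
  "form": {
    "fieldMap": {
      "environment": {
        "id": "environment",
        "name": "environment",
        "type": "select",
        "label": "Environment",
        "required": true,
        "options": [
          {
            "items": [
              {"label": "Sandbox", "value": "sandbox"},
              {"label": "Production", "value": "production"}
            ]
          }
        ]
      },
      "batchSize": {
        "id": "batchSize",
        "name": "batchSize",
        "type": "text",
        "inputType": "number",
        "label": "Batch size",
        "visibleWhen": [
          {"field": "environment", "is": ["production"]}
        ]
      }
    },
    "layout": {
      "type": "column",
      "containers": [
        {"type": "box", "label": "General", "fields": ["environment", "batchSize"]}
      ]
    }
  }
}
````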
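
Putting it together, a minimal `POST /v1/imports` request body might look like the sketch below. The name and `_connectionId` are placeholders, and the empty `http` object stands in for whatever adaptor-specific configuration the referenced Http schema requires; treat this as a starting point rather than a complete, working import.

````json
{
  "name": "Orders to example HTTP endpoint",
  "_connectionId": "5f1a2b3c4d5e6f7a8b9c0d1e",
  "adaptorType": "HTTPImport",
  "http": {
    // adaptor-specific configuration from the Http schema goes here
  }
}
````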
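
For error handling, note the two distinct failure shapes documented above: validation failures use the `errors[]` envelope, while 401s return a bare `message` body. Both examples below are illustrative; actual `code`, `field`, and `message` values will vary.

````json
// 400/422-style validation failure (Error schema)
{
  "errors": [
    {
      "code": "missing_required_field",
      "message": "_connectionId is required.",
      "field": "_connectionId"
    }
  ]
}

// 401 from the auth middleware (no errors[] array)
{
  "message": "Bearer Authentication Failed"
}
````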

## Get an import

> Returns the complete configuration of a specific import.<br>
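
Before the full specification below, here is an illustrative sketch of the object a successful GET-by-id request returns: the writable request fields plus read-only fields such as `_id`, `apiIdentifier`, `createdAt`, and `lastModified`. Every value shown is a placeholder.

````json
{
  "_id": "5f1a2b3c4d5e6f7a8b9c0d1e",
  "name": "Orders to example HTTP endpoint",
  "_connectionId": "5f0e1d2c3b4a596877665544",
  "adaptorType": "HTTPImport",
  "apiIdentifier": "i1a2b3c4",
  "createdAt": "2024-01-15T10:30:00.000Z",
  "lastModified": "2024-02-01T08:12:45.000Z"
}
````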

````json
{"openapi":"3.1.0","info":{"title":"Imports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"Response":{"type":"object","description":"Complete export object as returned by the API","allOf":[{"$ref":"#/components/schemas/Request"},{"$ref":"#/components/schemas/ResourceResponse"},{"$ref":"#/components/schemas/IAResourceResponse"},{"type":"object","properties":{"aiDescription":{"$ref":"#/components/schemas/AIDescription"},"apim":{"$ref":"#/components/schemas/APIM"},"apiIdentifier":{"type":"string","readOnly":true,"description":"API identifier assigned to this import."},"_sourceId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the source resource this import was created from."},"_templateId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the template used to create this import."},"draft":{"type":"boolean","readOnly":true,"description":"Indicates whether this import is in draft state."},"draftExpiresAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the draft version of this import expires."},"debugUntil":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp until which debug logging is enabled for this import."}}}]},"Request":{"type":"object","description":"Configuration for import","properties":{"_connectionId":{"type":"string","format":"objectId","description":"A unique identifier representing the specific connection instance within the system. This ID is used to track, manage, and reference the connection throughout its lifecycle, ensuring accurate routing and association of messages or actions to the correct connection context.\n\n**Field behavior**\n- Uniquely identifies a single connection session.\n- Remains consistent for the duration of the connection.\n- Used to correlate requests, responses, and events related to the connection.\n- Typically generated by the system upon connection establishment.\n\n**Implementation guidance**\n- Ensure the connection ID is globally unique within the scope of the application.\n- Use a format that supports easy storage and retrieval, such as UUID or a similar unique string.\n- Avoid exposing sensitive information within the connection ID.\n- Validate the connection ID format on input to prevent injection or misuse.\n\n**Examples**\n- \"123e4567-e89b-12d3-a456-426614174000\"\n- \"conn-9876543210\"\n- \"a1b2c3d4e5f6g7h8i9j0\"\n\n**Important notes**\n- The connection ID should not be reused for different connections.\n- It is critical for maintaining session integrity and security.\n- Should be treated as an opaque value by clients; its internal structure should not be assumed.\n\n**Dependency chain**\n- Generated during connection initialization.\n- Used by session management and message routing components.\n- Referenced in logs, audits, and debugging tools.\n\n**Technical details**\n- Typically implemented as a string data type.\n- May be generated using UUID libraries or custom algorithms.\n- Stored in connection metadata and passed in API headers or payloads as needed."},"_integrationId":{"type":"string","format":"objectId","description":"The unique identifier for the integration instance associated with the current operation or resource. 
This ID is used to reference and manage the specific integration within the system, ensuring that actions and data are correctly linked to the appropriate integration context.\n\n**Field behavior**\n- Uniquely identifies an integration instance.\n- Used to associate requests or resources with a specific integration.\n- Typically immutable once set to maintain consistent linkage.\n- May be required for operations involving integration-specific data or actions.\n\n**Implementation guidance**\n- Ensure the ID is generated in a globally unique manner to avoid collisions.\n- Validate the format and existence of the integration ID before processing requests.\n- Use this ID to fetch or update integration-related configurations or data.\n- Secure the ID to prevent unauthorized access or manipulation.\n\n**Examples**\n- \"intg_1234567890abcdef\"\n- \"integration-987654321\"\n- \"a1b2c3d4e5f6g7h8i9j0\"\n\n**Important notes**\n- This ID should not be confused with user IDs or other resource identifiers.\n- It is critical for maintaining the integrity of integration-related workflows.\n- Changes to this ID after creation can lead to inconsistencies or errors.\n\n**Dependency chain**\n- Dependent on the integration creation or registration process.\n- Used by downstream services or components that interact with the integration.\n- May be referenced in logs, audit trails, and monitoring systems.\n\n**Technical details**\n- Typically represented as a string.\n- May follow a specific pattern or prefix to denote integration IDs.\n- Stored in databases or configuration files linked to integration metadata.\n- Used as a key in API calls, database queries, and internal processing logic."},"_connectorId":{"type":"string","format":"objectId","description":"The unique identifier for the connector associated with the resource or operation. 
This ID is used to reference and manage the specific connector within the system, enabling integration and communication between different components or services.\n\n**Field behavior**\n- Uniquely identifies a connector instance.\n- Used to link resources or operations to a specific connector.\n- Typically immutable once assigned.\n- Required for operations that involve connector-specific actions.\n\n**Implementation guidance**\n- Ensure the connector ID is globally unique within the system.\n- Validate the format and existence of the connector ID before processing.\n- Use consistent naming or ID conventions to facilitate management.\n- Secure the connector ID to prevent unauthorized access or manipulation.\n\n**Examples**\n- \"conn-12345\"\n- \"connector_abcde\"\n- \"uuid-550e8400-e29b-41d4-a716-446655440000\"\n\n**Important notes**\n- The connector ID must correspond to a valid and active connector.\n- Changes to the connector ID may affect linked resources or workflows.\n- Connector IDs are often generated by the system and should not be manually altered.\n\n**Dependency chain**\n- Dependent on the existence of a connector registry or management system.\n- Used by components that require connector-specific configuration or data.\n- May be referenced by authentication, routing, or integration modules.\n\n**Technical details**\n- Typically represented as a string.\n- May follow UUID, GUID, or custom ID formats.\n- Stored and transmitted as part of API requests and responses.\n- Should be indexed in databases for efficient lookup."},"adaptorType":{"type":"string","description":"Specifies the underlying technology adapter that processes this import's operations.\n\nThis field determines:\n- Which connection types are compatible with this import\n- Which API endpoints and protocols will be used\n- Which import-specific configuration objects must be provided\n- The available features and capabilities of the import\n\nThe value must match an available adapter in the system and should correspond to the\nexternal system being accessed. For example:\n- \"HTTPImport\" for generic REST/SOAP APIs\n- \"SalesforceImport\" for Salesforce-specific operations\n- \"NetSuiteDistributedImport\" for NetSuite-specific operations\n- \"FTPImport\" for file transfers via FTP/SFTP\n\nWhen creating an import, this field must be set correctly and cannot be changed afterward\nwithout creating a new import resource.\n\n**Important notes**\n- When using a specific adapter type (e.g., \"SalesforceImport\"), you must also\nprovide the corresponding configuration object (e.g., \"salesforce\").\n- HTTPImport is the default adapter type for all REST/SOAP APIs and should be used unless there is a specialized adapter type for the specific API (such as SalesforceImport for Salesforce and NetSuiteDistributedImport for NetSuite).\n- WrapperImport is used for a custom adapter built outside of Celigo, and is very rarely used.\n- RDBMSImport is used for database imports that are built into Celigo such as SQL Server, MySQL, PostgreSQL, Snowflake, Oracle, MariaDB, etc. When using RDBMSImport, populate the \"rdbms\" configuration object.\n- JDBCImport is ONLY for generic JDBC connections that are not one of the built-in RDBMS types. Do NOT use JDBCImport for Snowflake, SQL Server, MySQL, PostgreSQL, or Oracle — use RDBMSImport instead. When using JDBCImport, populate the \"jdbc\" configuration object.\n- NetSuiteDistributedImport is the default for all NetSuite imports. 
Always use NetSuiteDistributedImport when importing into NetSuite.\n- NetSuiteImport is a legacy adaptor for NetSuite accounts without the Celigo SuiteApp 2.0. Do NOT use NetSuiteImport unless explicitly requested.\n\nThe connection type must be compatible with the adaptorType specified for this import.\nFor example, if adaptorType is \"HTTPImport\", _connectionId must reference a connection with type \"http\".\n","enum":["HTTPImport","FTPImport","AS2Import","S3Import","NetSuiteImport","NetSuiteDistributedImport","SalesforceImport","JDBCImport","RDBMSImport","MongodbImport","DynamodbImport","WrapperImport","AiAgentImport","GuardrailImport","FileSystemImport","ToolImport"]},"externalId":{"type":"string","description":"A unique identifier assigned to an entity by an external system or source, used to reference or correlate the entity across different platforms or services. This ID facilitates integration, synchronization, and data exchange between systems by providing a consistent reference point.\n\n**Field behavior**\n- Must be unique within the context of the external system.\n- Typically immutable once assigned to ensure consistent referencing.\n- Used to link or map records between internal and external systems.\n- May be optional or required depending on integration needs.\n\n**Implementation guidance**\n- Ensure the externalId format aligns with the external system’s specifications.\n- Validate uniqueness to prevent conflicts or data mismatches.\n- Store and transmit the externalId securely to maintain data integrity.\n- Consider indexing this field for efficient lookup and synchronization.\n\n**Examples**\n- \"12345-abcde-67890\"\n- \"EXT-2023-0001\"\n- \"user_987654321\"\n- \"SKU-XYZ-1001\"\n\n**Important notes**\n- The externalId is distinct from internal system identifiers.\n- Changes to the externalId can disrupt data synchronization.\n- Should be treated as a string to accommodate various formats.\n- May include alphanumeric characters, dashes, or underscores.\n\n**Dependency chain**\n- Often linked to integration or synchronization modules.\n- May depend on external system availability and consistency.\n- Used in API calls that involve cross-system data referencing.\n\n**Technical details**\n- Data type: string.\n- Maximum length may vary based on external system constraints.\n- Should support UTF-8 encoding to handle diverse character sets.\n- May require validation against a specific pattern or format."},"as2":{"$ref":"#/components/schemas/As2"},"dynamodb":{"$ref":"#/components/schemas/Dynamodb"},"http":{"$ref":"#/components/schemas/Http"},"ftp":{"$ref":"#/components/schemas/Ftp"},"jdbc":{"$ref":"#/components/schemas/Jdbc"},"mongodb":{"$ref":"#/components/schemas/Mongodb"},"netsuite":{"$ref":"#/components/schemas/NetSuite"},"netsuite_da":{"$ref":"#/components/schemas/NetsuiteDistributed"},"rdbms":{"$ref":"#/components/schemas/Rdbms"},"s3":{"$ref":"#/components/schemas/S3"},"wrapper":{"$ref":"#/components/schemas/Wrapper"},"salesforce":{"$ref":"#/components/schemas/Salesforce"},"file":{"$ref":"#/components/schemas/File"},"filesystem":{"$ref":"#/components/schemas/FileSystem"},"aiAgent":{"$ref":"#/components/schemas/AiAgent"},"guardrail":{"$ref":"#/components/schemas/Guardrail"},"name":{"type":"string","description":"The name property represents the identifier or title assigned to an entity, object, or resource within the system. It is typically a human-readable string that uniquely distinguishes the item from others in the same context. 
This property is essential for referencing, searching, and displaying the entity in user interfaces and API responses.\n\n**Field behavior**\n- Must be a non-empty string.\n- Should be unique within its scope to avoid ambiguity.\n- Often used as a primary label in UI components and logs.\n- May have length constraints depending on system requirements.\n- Typically case-sensitive or case-insensitive based on implementation.\n\n**Implementation guidance**\n- Validate the input to ensure it meets length and character requirements.\n- Enforce uniqueness if required by the application context.\n- Sanitize input to prevent injection attacks or invalid characters.\n- Support internationalization by allowing Unicode characters if applicable.\n- Provide clear error messages when validation fails.\n\n**Examples**\n- \"John Doe\"\n- \"Invoice_2024_001\"\n- \"MainServer\"\n- \"Project Phoenix\"\n- \"user@example.com\"\n\n**Important notes**\n- Avoid using reserved keywords or special characters that may interfere with system processing.\n- Consider trimming whitespace from the beginning and end of the string.\n- The name should be meaningful and descriptive to improve usability.\n- Changes to the name might affect references or links in other parts of the system.\n\n**Dependency chain**\n- May be referenced by other properties such as IDs, URLs, or display labels.\n- Could be linked to permissions or access control mechanisms.\n- Often used in search and filter operations within the API.\n\n**Technical details**\n- Data type: string.\n- Encoding: UTF-8 recommended.\n- Maximum length: typically defined by system constraints (e.g., 255 characters).\n- Validation rules: regex patterns may be applied to restrict allowed characters.\n- Storage considerations: indexed for fast lookup if used frequently in queries."},"description":{"type":"string","description":"A detailed textual explanation that describes the purpose, characteristics, or context of the associated entity or item. 
This field is intended to provide users with clear and comprehensive information to better understand the subject it represents.\n**Field behavior**\n- Accepts plain text or formatted text depending on implementation.\n- Should be concise yet informative, avoiding overly technical jargon unless necessary.\n- May support multi-line input to allow detailed explanations.\n- Typically optional but recommended for clarity.\n\n**Implementation guidance**\n- Ensure the description is user-friendly and accessible to the target audience.\n- Avoid including sensitive or confidential information.\n- Use consistent terminology aligned with the overall documentation or product language.\n- Consider supporting markdown or rich text formatting if applicable.\n\n**Examples**\n- \"This API endpoint retrieves user profile information including name, email, and preferences.\"\n- \"A unique identifier assigned to each transaction for tracking purposes.\"\n- \"Specifies the date and time when the event occurred, formatted in ISO 8601.\"\n\n**Important notes**\n- Keep descriptions up to date with any changes in functionality or behavior.\n- Avoid redundancy with other fields; focus on clarifying the entity’s role or usage.\n- Descriptions should not contain executable code or commands.\n\n**Dependency chain**\n- Often linked to the entity or property it describes.\n- May influence user understanding and correct usage of the associated field.\n- Can be referenced in user guides, API documentation, or UI tooltips.\n\n**Technical details**\n- Typically stored as a string data type.\n- May have length constraints depending on system limitations.\n- Encoding should support UTF-8 to accommodate international characters.\n- Should be sanitized to prevent injection attacks if rendered in UI contexts.","maxLength":10240},"unencrypted":{"type":"object","description":"Indicates whether the data or content is stored or transmitted without encryption, meaning it is in plain text and not protected by any cryptographic methods. 
This flag helps determine if additional security measures are necessary to safeguard sensitive information.\n\n**Field behavior**\n- Represents a boolean state where `true` means data is unencrypted and `false` means data is encrypted.\n- Used to identify if the data requires encryption before storage or transmission.\n- May influence processing logic related to data security and compliance.\n\n**Implementation guidance**\n- Ensure this field is explicitly set to reflect the actual encryption status of the data.\n- Use this flag to trigger encryption routines or security checks when data is unencrypted.\n- Validate the field to prevent false positives that could lead to security vulnerabilities.\n\n**Examples**\n- `true` indicating a password stored in plain text (not recommended).\n- `false` indicating a file that has been encrypted before upload.\n- `true` for a message sent over an unsecured channel.\n\n**Important notes**\n- Storing or transmitting data with `unencrypted` set to `true` can expose sensitive information to unauthorized access.\n- This field does not perform encryption itself; it only signals the encryption status.\n- Proper handling of unencrypted data is critical for compliance with data protection regulations.\n\n**Dependency chain**\n- Often used in conjunction with fields specifying encryption algorithms or keys.\n- May affect downstream processes such as logging, auditing, or alerting mechanisms.\n- Related to security policies and access control configurations.\n\n**Technical details**\n- Typically represented as a boolean value.\n- Should be checked before performing operations that assume data confidentiality.\n- May be integrated with security frameworks to enforce encryption standards."},"sampleData":{"type":"object","description":"An array or collection of sample data entries used to demonstrate or test the functionality of the API or application. This data serves as example input or output to help users understand the expected format, structure, and content. 
It can include various data types depending on the context, such as strings, numbers, objects, or nested arrays.\n\n**Field behavior**\n- Contains example data that mimics real-world usage scenarios.\n- Can be used for testing, validation, or demonstration purposes.\n- May be optional or required depending on the API specification.\n- Should be representative of typical data to ensure meaningful examples.\n\n**Implementation guidance**\n- Populate with realistic and relevant sample entries.\n- Ensure data types and structures align with the API schema.\n- Avoid sensitive or personally identifiable information.\n- Update regularly to reflect any changes in the API data model.\n\n**Examples**\n- A list of user profiles with names, emails, and roles.\n- Sample product entries with IDs, descriptions, and prices.\n- Example sensor readings with timestamps and values.\n- Mock transaction records with amounts and statuses.\n\n**Important notes**\n- Sample data is for illustrative purposes and may not reflect actual production data.\n- Should not be used for live processing or decision-making.\n- Ensure compliance with data privacy and security standards when creating samples.\n\n**Dependency chain**\n- Dependent on the data model definitions within the API.\n- May influence or be influenced by validation rules and schema constraints.\n- Often linked to documentation and testing frameworks.\n\n**Technical details**\n- Typically formatted as JSON arrays or objects.\n- Can include nested structures to represent complex data.\n- Size and complexity should be balanced to optimize performance and clarity.\n- May be embedded directly in API responses or provided as separate resources."},"distributed":{"type":"boolean","description":"Indicates whether the import is using a distributed adaptor, such as a NetSuite Distributed Import.\n**Field behavior**\n- Accepts boolean values: true or false.\n- When true, the import is using a distributed adaptor such as a NetSuite Distributed Import.\n- When false, the import is not using a distributed adaptor.\n\n**Examples**\n- true: When the adaptorType is NetSuiteDistributedImport\n- false: In all other cases"},"maxAttempts":{"type":"number","description":"Specifies the maximum number of attempts allowed for a particular operation or action before it is considered failed or terminated. This property helps control retry logic and prevents infinite loops or excessive retries in processes such as authentication, data submission, or task execution.  \n**Field behavior:**  \n- Defines the upper limit on retry attempts for an operation.  \n- Once the maximum attempts are reached, no further retries are made.  \n- Can be used to trigger fallback mechanisms or error handling after exceeding attempts.  \n**Implementation guidance:**  \n- Set to a positive integer value representing the allowed retry count.  \n- Should be configured based on the operation's criticality and expected failure rates.  \n- Consider exponential backoff or delay strategies in conjunction with maxAttempts.  \n- Validate input to ensure it is a non-negative integer.  \n**Examples:**  \n- 3 (allowing up to three retry attempts)  \n- 5 (permitting five attempts before failure)  \n- 0 (disabling retries entirely)  \n**Important notes:**  \n- Setting this value too high may cause unnecessary resource consumption or delays.  \n- Setting it too low may result in premature failure without sufficient retry opportunities.  \n- Should be used in combination with timeout settings for robust error handling.  
\n**Dependency chain:**  \n- Often used alongside properties like retryDelay, timeout, or backoffStrategy.  \n- May influence or be influenced by error handling or circuit breaker configurations.  \n**Technical details:**  \n- Typically represented as an integer data type.  \n- Enforced by the application logic or middleware managing retries.  \n- May be configurable at runtime or deployment time depending on system design."},"ignoreExisting":{"type":"boolean","description":"**CRITICAL FIELD:** Controls whether the import should skip records that already exist in the target system rather than attempting to update them or throwing errors.\n\nWhen set to true, the import will:\n- Check if each record already exists in the target system (using a lookup configured in the import, or if the record coming into the import has a value populated in the field configured in the ignoreExtract property)\n- Skip processing for records that are found to already exist\n- Continue processing only records that don't exist (new records)\n- This is for CREATE operations where you want to avoid duplicates\n\nWhen set to false or undefined (default):\n- All records are processed normally in the creation process (if applicable)\n- Existing records may be duplicated or may cause errors depending on the API\n- No special existence checking is performed\n\n**When to set this to true (MUST detect these PATTERNS)**\n\n**ALWAYS set ignoreExisting to true** when the user's prompt includes ANY of these patterns:\n\n**Primary Patterns (very common)**\n- \"ignore existing\" / \"ignoring existing\" / \"ignore any existing\"\n- \"skip existing\" / \"skipping existing\" / \"skip any existing\"\n- \"while ignoring existing\" / \"while skipping existing\"\n- \"skip duplicates\" / \"avoid duplicates\" / \"ignore duplicates\"\n\n**Secondary Patterns**\n- \"only create new\" / \"only add new\" / \"create new only\"\n- \"don't update existing\" / \"avoid updating existing\" / \"without updating\"\n- \"create if doesn't exist\" / \"add if doesn't exist\"\n- \"insert only\" / \"create only\"\n- \"skip if already exists\" / \"ignore if exists\"\n- \"don't process existing\" / \"bypass existing\"\n\n**Context Clues**\n- User says \"create\" or \"insert\" AND mentions checking/matching against existing data\n- User wants to add records but NOT modify what's already there\n- User is concerned about duplicate prevention during creation\n\n**Examples**\n\n**Should set ignoreExisting: true**\n- \"Create customer records in Shopify while ignoring existing customers\" → ignoreExisting: true\n- \"Create vendor records while ignoring existing vendors\" → ignoreExisting: true\n- \"Import new products only, skip existing ones\" → ignoreExisting: true\n- \"Add orders to the system, ignore any that already exist\" → ignoreExisting: true\n- \"Create accounts but don't update existing ones\" → ignoreExisting: true\n- \"Insert new contacts, skip duplicates\" → ignoreExisting: true\n- \"Create records, skipping any that match existing data\" → ignoreExisting: true\n\n**Should not set ignoreExisting: true**\n- \"Update existing customer records\" → ignoreExisting: false (update operation)\n- \"Create or update records\" → ignoreExisting: false (upsert operation)\n- \"Sync all records\" → ignoreExisting: false (sync/upsert operation)\n\n**Important**\n- This field is typically used with INSERT operations\n- Common pattern: \"Create X while ignoring existing X\" → MUST set to true\n\n**When to leave this false/undefined**\n\nLeave this false or undefined 
when:\n- The prompt doesn't mention skipping or ignoring existing records\n- The import is only performing updates (no inserts)\n- The prompt says \"sync\", \"update\", or \"upsert\"\n\n**Important notes**\n- This flag requires either a properly configured lookup to identify existing records, or an ignoreExtract field that identifies the field used to determine whether the record already exists.\n- The lookup typically checks a unique identifier field (id, email, external_id, etc.)\n- When true, existing records are silently skipped (not updated, not errored)\n- Use this for insert-only or create-only operations\n- Do NOT use this if the user wants to update existing records\n\n**Technical details**\n- Works in conjunction with the lookup configuration, or the configured ignoreExtract field, to determine whether a record already exists.\n- The lookup queries the target system before attempting to create the record\n- If the lookup returns a match, the record is skipped\n- If the lookup returns no match, the record is processed normally"},"ignoreMissing":{"type":"boolean","description":"Indicates whether the system should ignore missing or non-existent resources during processing. When set to true, the operation will proceed without error even if some referenced items are not found; when false, the absence of required resources will cause the operation to fail or return an error.  \n**Field behavior:**  \n- Controls error handling related to missing resources.  \n- Allows operations to continue gracefully when some data is unavailable.  \n- Affects the robustness and fault tolerance of the process.  \n**Implementation guidance:**  \n- Use true to enable lenient processing where missing data is acceptable.  \n- Use false to enforce strict validation and fail fast on missing resources.  \n- Ensure that downstream logic can handle partial results if ignoring missing items.  \n**Examples:**  \n- ignoreMissing: true — skips over missing files during batch processing.  \n- ignoreMissing: false — throws an error if a referenced database entry is not found.  \n**Important notes:**  \n- Setting this to true may lead to incomplete results if missing data is critical.  \n- Default behavior should be clearly documented to avoid unexpected failures.  \n- Consider logging or reporting missing items even when ignoring them.  \n**Dependency chain:**  \n- Often used in conjunction with resource identifiers or references.  \n- May impact validation and error reporting modules.  \n**Technical details:**  \n- Typically a boolean flag.  \n- Influences control flow and exception handling mechanisms.  \n- Should be clearly defined in API contracts to manage client expectations."},"idLockTemplate":{"type":"string","description":"The unique identifier for the lock template used to define the configuration and behavior of a lock within the system. 
This ID links the lock instance to a predefined template that specifies its properties, access rules, and operational parameters.\n\n**Field behavior**\n- Uniquely identifies a lock template within the system.\n- Used to associate a lock instance with its configuration template.\n- Typically immutable once assigned to ensure consistent behavior.\n\n**Implementation guidance**\n- Must be a valid identifier corresponding to an existing lock template.\n- Should be validated against the list of available templates before assignment.\n- Ensure uniqueness to prevent conflicts between different lock configurations.\n\n**Examples**\n- \"template-12345\"\n- \"lockTemplateA1B2C3\"\n- \"LT-987654321\"\n\n**Important notes**\n- Changing the template ID after lock creation may lead to inconsistent lock behavior.\n- The template defines critical lock parameters; ensure the correct template is referenced.\n- This field is essential for lock provisioning and management workflows.\n\n**Dependency chain**\n- Depends on the existence of predefined lock templates in the system.\n- Used by lock management and access control modules to apply configurations.\n\n**Technical details**\n- Typically represented as a string or UUID.\n- Should conform to the system’s identifier format standards.\n- Indexed for efficient lookup and retrieval in the database."},"dataURITemplate":{"type":"string","description":"A template string used to construct data URIs dynamically by embedding variable placeholders that are replaced with actual values at runtime. This template facilitates the generation of standardized data URIs for accessing or referencing resources within the system.\n\n**Field behavior**\n- Accepts a string containing placeholders for variables.\n- Placeholders are replaced with corresponding runtime values to form a complete data URI.\n- Enables dynamic and flexible URI generation based on context or input parameters.\n- Supports standard URI encoding where necessary to ensure valid URI formation.\n\n**Implementation guidance**\n- Define clear syntax for variable placeholders within the template (e.g., using curly braces or another delimiter).\n- Ensure that all required variables are provided at runtime to avoid incomplete URIs.\n- Validate the final constructed URI to confirm it adheres to URI standards.\n- Consider supporting optional variables with default values to increase template flexibility.\n\n**Examples**\n- \"data:text/plain;base64,{base64EncodedData}\"\n- \"data:{mimeType};charset={charset},{data}\"\n- \"data:image/{imageFormat};base64,{encodedImageData}\"\n\n**Important notes**\n- The template must produce a valid data URI after variable substitution.\n- Proper encoding of variable values is critical to prevent malformed URIs.\n- This property is essential for scenarios requiring inline resource embedding or data transmission via URIs.\n- Misconfiguration or missing variables can lead to invalid or unusable URIs.\n\n**Dependency chain**\n- Relies on the availability of runtime variables corresponding to placeholders.\n- May depend on encoding utilities to prepare variable values.\n- Often used in conjunction with data processing or resource generation components.\n\n**Technical details**\n- Typically a UTF-8 encoded string.\n- Supports standard URI schemes and encoding rules.\n- Variable placeholders should be clearly defined and consistently parsed.\n- May integrate with templating engines or string interpolation 
mechanisms."},"oneToMany":{"$ref":"#/components/schemas/OneToMany"},"pathToMany":{"$ref":"#/components/schemas/PathToMany"},"blobKeyPath":{"type":"string","description":"Specifies the file system path to the blob key, which uniquely identifies the binary large object (blob) within the storage system. This path is used to locate and access the blob data efficiently during processing or retrieval operations.\n**Field behavior**\n- Represents a string path pointing to the location of the blob key.\n- Used as a reference to access the associated blob data.\n- Typically immutable once set to ensure consistent access.\n- May be required for operations involving blob retrieval or manipulation.\n**Implementation guidance**\n- Ensure the path is valid and correctly formatted according to the storage system's conventions.\n- Validate the existence of the blob key at the specified path before processing.\n- Handle cases where the path may be null or empty gracefully.\n- Secure the path to prevent unauthorized access to blob data.\n**Examples**\n- \"/data/blobs/abc123def456\"\n- \"blobstore/keys/2024/06/blobkey789\"\n- \"s3://bucket-name/blob-keys/key123\"\n**Important notes**\n- The blobKeyPath must correspond to an existing blob key in the storage backend.\n- Incorrect or malformed paths can lead to failures in blob retrieval.\n- Access permissions to the path must be properly configured.\n- The format of the path may vary depending on the underlying storage technology.\n**Dependency chain**\n- Dependent on the storage system's directory or key structure.\n- Used by blob retrieval and processing components.\n- May be linked to metadata describing the blob.\n**Technical details**\n- Typically represented as a UTF-8 encoded string.\n- May include hierarchical directory structures or URI schemes.\n- Should be sanitized to prevent injection or path traversal vulnerabilities.\n- Often used as a lookup key in blob storage APIs or databases."},"blob":{"type":"boolean","description":"The binary large object (blob) representing the raw data content to be processed, stored, or transmitted. 
This field typically contains encoded or serialized data such as images, files, or other multimedia content in a compact binary format.\n\n**Field behavior**\n- Holds the actual binary data payload.\n- Can represent various data types including images, documents, or other file formats.\n- May be encoded in formats like Base64 for safe transmission over text-based protocols.\n- Treated as opaque data by the API, with no interpretation unless specified.\n\n**Implementation guidance**\n- Ensure the blob data is properly encoded (e.g., Base64) if the transport medium requires text-safe encoding.\n- Validate the size and format of the blob to meet API constraints.\n- Handle decoding and encoding consistently on both client and server sides.\n- Use streaming or chunking for very large blobs to optimize performance.\n\n**Examples**\n- A Base64-encoded JPEG image file.\n- A serialized JSON object converted into a binary format.\n- A PDF document encoded as a binary blob.\n- An audio file represented as a binary stream.\n\n**Important notes**\n- The blob content is typically opaque and should not be altered during transmission.\n- Size limits may apply depending on API or transport constraints.\n- Proper encoding and decoding are critical to avoid data corruption.\n- Security considerations should be taken into account when handling binary data.\n\n**Dependency chain**\n- May depend on encoding schemes (e.g., Base64) for safe transmission.\n- Often used in conjunction with metadata fields describing the blob type or format.\n- Requires appropriate content-type headers or descriptors for correct interpretation.\n\n**Technical details**\n- Usually represented as a byte array or Base64-encoded string in JSON APIs.\n- May require MIME type specification to indicate the nature of the data.\n- Handling may involve buffer management and memory considerations.\n- Supports binary-safe transport mechanisms to preserve data integrity."},"assistant":{"type":"string","description":"Specifies the configuration and behavior settings for the AI assistant that will interact with the user. 
This property defines how the assistant responds, including its personality, knowledge scope, response style, and any special instructions or constraints that guide its operation.\n\n**Field behavior**\n- Determines the assistant's tone, style, and manner of communication.\n- Controls the knowledge base or data sources the assistant can access.\n- Enables customization of the assistant's capabilities and limitations.\n- May include parameters for language, verbosity, and response format.\n\n**Implementation guidance**\n- Ensure the assistant configuration aligns with the intended user experience.\n- Validate that all required sub-properties within the assistant configuration are correctly set.\n- Support dynamic updates to the assistant settings to adapt to different contexts or user needs.\n- Provide defaults for unspecified settings to maintain consistent behavior.\n\n**Examples**\n- Setting the assistant to a formal tone with technical expertise.\n- Configuring the assistant to provide concise answers with references.\n- Defining the assistant to operate within a specific domain, such as healthcare or finance.\n- Enabling multi-language support for the assistant responses.\n\n**Important notes**\n- Changes to the assistant property can significantly affect user interaction quality.\n- Properly securing and validating assistant configurations is critical to prevent misuse.\n- The assistant's behavior should comply with ethical guidelines and privacy regulations.\n- Overly restrictive settings may limit the assistant's usefulness, while too broad settings may reduce relevance.\n\n**Dependency chain**\n- May depend on user preferences or session context.\n- Interacts with the underlying AI model and its capabilities.\n- Influences downstream processing of user inputs and outputs.\n- Can be linked to external knowledge bases or APIs for enhanced responses.\n\n**Technical details**\n- Typically structured as an object containing multiple nested properties.\n- May include fields such as personality traits, knowledge cutoff dates, and response constraints.\n- Supports serialization and deserialization for API communication.\n- Requires compatibility with the AI platform's configuration schema."},"deleteAfterImport":{"type":"boolean","description":"Indicates whether the source file should be deleted automatically after the import process completes successfully. 
This boolean flag helps manage storage by removing files that are no longer needed once their data has been ingested.\n\n**Field behavior**\n- When set to true, the system deletes the source file immediately after a successful import.\n- When set to false or omitted, the source file remains intact after import.\n- Deletion only occurs if the import process completes without errors.\n\n**Implementation guidance**\n- Validate that the import was successful before deleting the file.\n- Ensure proper permissions are in place to delete the file.\n- Consider logging deletion actions for audit purposes.\n- Provide clear user documentation about the implications of enabling this flag.\n\n**Examples**\n- true: The file \"data.csv\" will be removed after import.\n- false: The file \"data.csv\" will remain after import for manual review or backup.\n\n**Important notes**\n- Enabling this flag may result in data loss if the file is needed for re-import or troubleshooting.\n- Use with caution in environments where file retention policies are strict.\n- This flag does not affect files that fail to import.\n\n**Dependency chain**\n- Depends on the successful completion of the import operation.\n- May interact with file system permissions and cleanup routines.\n\n**Technical details**\n- Typically implemented as a boolean data type.\n- Triggers a file deletion API or system call post-import.\n- Should handle exceptions gracefully to avoid orphaned files or unintended deletions."},"assistantMetadata":{"type":"object","description":"Metadata containing additional information about the assistant, such as configuration details, versioning, capabilities, and any custom attributes that help in managing or identifying the assistant's behavior and context. This metadata supports enhanced control, monitoring, and customization of the assistant's interactions.\n\n**Field behavior**\n- Holds supplementary data that describes or configures the assistant.\n- Can include static or dynamic information relevant to the assistant's operation.\n- Used to influence or inform the assistant's responses and functionality.\n- May be updated or extended to reflect changes in the assistant's capabilities or environment.\n\n**Implementation guidance**\n- Structure metadata in a clear, consistent format (e.g., key-value pairs).\n- Ensure sensitive information is not exposed through metadata.\n- Use metadata to enable feature toggles, version tracking, or context awareness.\n- Validate metadata content to maintain integrity and compatibility.\n\n**Examples**\n- Version number of the assistant (e.g., \"v1.2.3\").\n- Supported languages or locales.\n- Enabled features or modules.\n- Custom tags indicating the assistant's domain or purpose.\n\n**Important notes**\n- Metadata should not contain user-specific or sensitive personal data.\n- Keep metadata concise to avoid performance overhead.\n- Changes in metadata may affect assistant behavior; manage updates carefully.\n- Metadata is primarily for internal use and may not be exposed to end users.\n\n**Dependency chain**\n- May depend on assistant configuration settings.\n- Can influence or be referenced by response generation logic.\n- Interacts with monitoring and analytics systems for tracking.\n\n**Technical details**\n- Typically represented as a JSON object or similar structured data.\n- Should be easily extensible to accommodate future attributes.\n- Must be compatible with the overall assistant API schema.\n- Should support serialization and deserialization without data 
loss."},"useTechAdaptorForm":{"type":"boolean","description":"Indicates whether the technical adaptor form should be utilized in the current process or workflow. This boolean flag determines if the system should enable and display the technical adaptor form interface for user interaction or automated processing.\n\n**Field behavior**\n- When set to true, the technical adaptor form is activated and presented to the user or system.\n- When set to false, the technical adaptor form is bypassed or hidden.\n- Influences the flow of data input and validation related to technical adaptor configurations.\n\n**Implementation guidance**\n- Default to false if the technical adaptor form is optional or not always required.\n- Ensure that enabling this flag triggers all necessary UI components and backend processes related to the technical adaptor form.\n- Validate that dependent fields or modules respond appropriately when this flag changes state.\n\n**Examples**\n- true: The system displays the technical adaptor form for configuration.\n- false: The system skips the technical adaptor form and proceeds without it.\n\n**Important notes**\n- Changing this flag may affect downstream processing and data integrity.\n- Must be synchronized with user permissions and feature availability.\n- Should be clearly documented in user guides if exposed in UI settings.\n\n**Dependency chain**\n- May depend on user roles or feature toggles enabling technical adaptor functionality.\n- Could influence or be influenced by other configuration flags related to adaptor workflows.\n\n**Technical details**\n- Typically implemented as a boolean data type.\n- Used in conditional logic to control UI rendering and backend processing paths.\n- May trigger event listeners or hooks when its value changes."},"distributedAdaptorData":{"type":"object","description":"Contains configuration and operational data specific to distributed adaptors used within the system. This data includes parameters, status information, and metadata necessary for managing and monitoring distributed adaptor instances across different nodes or environments. 
It facilitates synchronization, performance tuning, and error handling for adaptors operating in a distributed architecture.\n\n**Field behavior**\n- Holds detailed information about each distributed adaptor instance.\n- Supports dynamic updates to reflect real-time adaptor status and configuration changes.\n- Enables centralized management and monitoring of distributed adaptors.\n- May include nested structures representing various adaptor attributes and metrics.\n\n**Implementation guidance**\n- Ensure data consistency and integrity when updating distributedAdaptorData.\n- Use appropriate serialization formats to handle complex nested data.\n- Implement validation to verify the correctness of adaptor configuration parameters.\n- Design for scalability to accommodate varying numbers of distributed adaptors.\n\n**Examples**\n- Configuration settings for a distributed database adaptor including connection strings and timeout values.\n- Status reports indicating the health and performance metrics of adaptors deployed across multiple servers.\n- Metadata describing adaptor version, deployment environment, and last synchronization timestamp.\n\n**Important notes**\n- This property is critical for systems relying on distributed adaptors to function correctly.\n- Changes to this data should be carefully managed to avoid disrupting adaptor operations.\n- Sensitive information within adaptor data should be secured and access-controlled.\n\n**Dependency chain**\n- Dependent on the overall system configuration and deployment topology.\n- Interacts with monitoring and management modules that consume adaptor data.\n- May influence load balancing and failover mechanisms within the distributed system.\n\n**Technical details**\n- Typically represented as a complex object or map with key-value pairs.\n- May include arrays or lists to represent multiple adaptor instances.\n- Requires efficient parsing and serialization to minimize performance overhead.\n- Should support versioning to handle changes in adaptor data schema over time."},"filter":{"allOf":[{"description":"Filter configuration for selectively processing records during import operations.\n\n**Import-specific behavior**\n\n**Pre-Import Filtering**: Filters are applied to incoming records before they are sent to the destination system. Records that don't match the filter criteria are silently dropped and not imported.\n\n**Available filter fields**\n\nThe fields available for filtering are the data fields from each record being imported.\n"},{"$ref":"#/components/schemas/Filter"}]},"traceKeyTemplate":{"type":"string","description":"A template string used to generate unique trace keys for tracking and correlating requests across distributed systems. 
This template supports placeholders that can be dynamically replaced with runtime values such as timestamps, unique identifiers, or contextual metadata to create meaningful and consistent trace keys.\n\n**Field behavior**\n- Defines the format and structure of trace keys used in logging and monitoring.\n- Supports dynamic insertion of variables or placeholders to customize trace keys per request.\n- Ensures trace keys are unique and consistent for effective traceability.\n- Can be used to correlate logs, metrics, and traces across different services.\n\n**Implementation guidance**\n- Use clear and descriptive placeholders that map to relevant runtime data (e.g., {timestamp}, {requestId}).\n- Ensure the template produces trace keys that are unique enough to avoid collisions.\n- Validate the template syntax before applying it to avoid runtime errors.\n- Consider including service names or environment identifiers for multi-service deployments.\n- Keep the template concise to avoid excessively long trace keys.\n\n**Examples**\n- \"trace-{timestamp}-{requestId}\"\n- \"{serviceName}-{env}-{uniqueId}\"\n- \"traceKey-{userId}-{sessionId}-{epochMillis}\"\n\n**Important notes**\n- The template must be compatible with the tracing and logging infrastructure in use.\n- Placeholders should be well-defined and documented to ensure consistent usage.\n- Improper templates may lead to trace key collisions or loss of traceability.\n- Avoid sensitive information in trace keys to maintain security and privacy.\n\n**Dependency chain**\n- Relies on runtime context or metadata to replace placeholders.\n- Integrates with logging, monitoring, and tracing systems that consume trace keys.\n- May depend on unique identifier generators or timestamp providers.\n\n**Technical details**\n- Typically implemented as a string with placeholder syntax (e.g., curly braces).\n- Requires a parser or formatter to replace placeholders with actual values at runtime.\n- Should support extensibility to add new placeholders as needed.\n- May need to conform to character restrictions imposed by downstream systems."},"mockResponse":{"type":"object","description":"Defines a predefined response that the system will return when the associated API endpoint is invoked, allowing for simulation of real API behavior without requiring actual backend processing. 
This is particularly useful for testing, development, and demonstration purposes where consistent and controlled responses are needed.\n\n**Field behavior**\n- Returns the specified mock data instead of executing the real API logic.\n- Can simulate various response scenarios including success, error, and edge cases.\n- Overrides the default response when enabled.\n- Supports static or dynamic content depending on implementation.\n\n**Implementation guidance**\n- Ensure the mock response format matches the expected API response schema.\n- Use realistic data to closely mimic actual API behavior.\n- Update mock responses regularly to reflect changes in the real API.\n- Consider including headers, status codes, and body content in the mock response.\n- Provide clear documentation on when and how the mock response is used.\n\n**Examples**\n- Returning a fixed JSON object representing a user profile.\n- Simulating a 404 error response with an error message.\n- Providing a list of items with pagination metadata.\n- Mocking a successful transaction confirmation message.\n\n**Important notes**\n- Mock responses should not be used in production environments unless explicitly intended.\n- They are primarily for development, testing, and demonstration.\n- Overuse of mock responses can mask issues in the real API implementation.\n- Ensure that consumers of the API are aware when mock responses are active.\n\n**Dependency chain**\n- Dependent on the API endpoint configuration.\n- May interact with request parameters to determine the appropriate mock response.\n- Can be linked with feature flags or environment settings to toggle mock behavior.\n\n**Technical details**\n- Typically implemented as static JSON or XML payloads.\n- May include HTTP status codes and headers.\n- Can be integrated with API gateways or mocking tools.\n- Supports conditional logic in advanced implementations to vary responses."},"_ediProfileId":{"type":"string","format":"objectId","description":"The unique identifier for the Electronic Data Interchange (EDI) profile associated with the transaction or entity. 
This ID links the current data to a specific EDI configuration that defines the format, protocols, and trading partner details used for electronic communication.\n\n**Field behavior**\n- Uniquely identifies an EDI profile within the system.\n- Used to retrieve or reference EDI settings and parameters.\n- Must be consistent and valid to ensure proper EDI processing.\n- Typically immutable once assigned to maintain data integrity.\n\n**Implementation guidance**\n- Ensure the ID corresponds to an existing and active EDI profile in the system.\n- Validate the format and existence of the ID before processing transactions.\n- Use this ID to fetch EDI-specific rules, mappings, and partner information.\n- Secure the ID to prevent unauthorized changes or misuse.\n\n**Examples**\n- \"EDI12345\"\n- \"PROF-67890\"\n- \"X12_PROFILE_001\"\n\n**Important notes**\n- This ID is critical for routing and formatting EDI messages correctly.\n- Incorrect or missing IDs can lead to failed or misrouted EDI transmissions.\n- The ID should be managed centrally to avoid duplication or conflicts.\n\n**Dependency chain**\n- Depends on the existence of predefined EDI profiles in the system.\n- Used by EDI processing modules to apply correct communication standards.\n- May be linked to trading partner configurations and document types.\n\n**Technical details**\n- Typically a string or alphanumeric value.\n- Stored in a database with indexing for quick lookup.\n- May follow a naming convention defined by the organization or EDI standards.\n- Used as a foreign key in relational data models involving EDI transactions."},"parsers":{"type":["string","null"],"enum":["1"],"description":"A collection of parser configurations that define how input data should be interpreted and processed. Each parser specifies rules, patterns, or formats to extract meaningful information from raw data sources, enabling the system to handle diverse data types and structures effectively. 
This property allows customization and extension of parsing capabilities to accommodate various input formats.\n\n**Field behavior**\n- Accepts multiple parser definitions as an array or list.\n- Each parser operates independently to process specific data formats.\n- Parsers are applied in the order they are defined unless otherwise specified.\n- Supports enabling or disabling individual parsers dynamically.\n- Can include built-in or custom parser implementations.\n\n**Implementation guidance**\n- Ensure parsers are well-defined with clear matching criteria and extraction rules.\n- Validate parser configurations to prevent conflicts or overlaps.\n- Provide mechanisms to add, update, or remove parsers without downtime.\n- Support extensibility to integrate new parsing logic as needed.\n- Document each parser’s purpose and expected input/output formats.\n\n**Examples**\n- A JSON parser that extracts fields from JSON-formatted input.\n- A CSV parser that splits input lines into columns based on delimiters.\n- A regex-based parser that identifies patterns within unstructured text.\n- An XML parser that navigates hierarchical data structures.\n- A custom parser designed to interpret proprietary log file formats.\n\n**Important notes**\n- Incorrect parser configurations can lead to data misinterpretation or processing errors.\n- Parsers should be optimized for performance to handle large volumes of data efficiently.\n- Consider security implications when parsing untrusted input to avoid injection attacks.\n- Testing parsers with representative sample data is crucial for reliability.\n- Parsers may depend on external libraries or modules; ensure compatibility.\n\n**Dependency chain**\n- Relies on input data being available and accessible.\n- May depend on schema definitions or data contracts for accurate parsing.\n- Interacts with downstream components that consume parsed output.\n- Can be influenced by global settings such as character encoding or locale.\n- May require synchronization with data validation and transformation steps.\n\n**Technical details**\n- Typically implemented as modular components or plugins.\n- Configurations may include pattern definitions, field mappings, and error handling rules.\n- Supports various data formats including text, binary, and structured documents.\n- May expose APIs or interfaces for runtime configuration and monitoring.\n- Often integrated with logging and debugging tools to trace parsing operations."},"hooks":{"type":"object","properties":{"preMap":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the function to be executed during the pre-mapping hook phase. This function allows for custom logic to be applied before the main mapping process begins, enabling data transformation, validation, or any preparatory steps required. 
It should be defined in a way that it can be invoked with the appropriate context and input data.\n\n**Field behavior**\n- Executed before the mapping process starts.\n- Receives input data and context for processing.\n- Can modify or validate data prior to mapping.\n- Should return the processed data or relevant output for the next stage.\n\n**Implementation guidance**\n- Define the function with clear input parameters and expected output.\n- Ensure the function handles errors gracefully to avoid interrupting the mapping flow.\n- Keep the function focused on pre-mapping concerns only.\n- Test the function independently to verify its behavior before integration.\n\n**Examples**\n- A function that normalizes input data formats.\n- A function that filters out invalid entries.\n- A function that enriches data with additional attributes.\n- A function that logs input data for auditing purposes.\n\n**Important notes**\n- The function must be synchronous or properly handle asynchronous operations.\n- Avoid side effects that could impact other parts of the mapping process.\n- Ensure compatibility with the overall data pipeline and mapping framework.\n- Document the function’s purpose and usage clearly for maintainability.\n\n**Dependency chain**\n- Invoked after the preMap hook is triggered.\n- Outputs data consumed by the subsequent mapping logic.\n- May depend on external utilities or services for data processing.\n\n**Technical details**\n- Typically implemented as a JavaScript or TypeScript function.\n- Receives parameters such as input data object and context metadata.\n- Returns a transformed data object or a promise resolving to it.\n- Integrated into the mapping engine’s hook system for execution timing."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier for the script to be executed during the pre-mapping hook phase. This ID is used to reference and invoke the specific script that performs custom logic or transformations before the main mapping process begins.\n**Field behavior**\n- Specifies which script is triggered before the mapping operation.\n- Must correspond to a valid and existing script within the system.\n- Determines the custom pre-processing logic applied to data.\n**Implementation guidance**\n- Ensure the script ID is correctly registered and accessible in the environment.\n- Validate the script's compatibility with the preMap hook context.\n- Use consistent naming conventions for script IDs to avoid conflicts.\n**Examples**\n- \"script12345\"\n- \"preMapTransform_v2\"\n- \"hookScript_001\"\n**Important notes**\n- An invalid or missing script ID will result in the preMap hook being skipped or causing an error.\n- The script associated with this ID should be optimized for performance to avoid delays.\n- Permissions must be set appropriately to allow execution of the referenced script.\n**Dependency chain**\n- Depends on the existence of the script repository or script management system.\n- Linked to the preMap hook lifecycle event in the mapping process.\n- May interact with input data and influence subsequent mapping steps.\n**Technical details**\n- Typically represented as a string identifier.\n- Used internally by the system to locate and execute the script.\n- May be stored in a database or configuration file referencing available scripts."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the current operation or context. 
This ID is used to reference and manage the specific stack within the system, ensuring that all related resources and actions are correctly linked.  \n**Field behavior:**  \n- Uniquely identifies a stack instance within the environment.  \n- Used internally to track and manage stack-related operations.  \n- Typically immutable once assigned for a given stack lifecycle.  \n**Implementation guidance:**  \n- Should be generated or assigned by the system managing the stacks.  \n- Must be validated to ensure uniqueness within the scope of the application.  \n- Should be securely stored and transmitted to prevent unauthorized access or manipulation.  \n**Examples:**  \n- \"stack-12345abcde\"  \n- \"arn:aws:cloudformation:us-east-1:123456789012:stack/my-stack/50d6f0a0-1234-11e8-9a8b-0a1234567890\"  \n- \"projX-envY-stackZ\"  \n**Important notes:**  \n- This ID is critical for correlating resources and operations to the correct stack.  \n- Changing the stack ID after creation can lead to inconsistencies and errors.  \n- Should not be confused with other identifiers like resource IDs or deployment IDs.  \n**Dependency chain:**  \n- Depends on the stack creation or registration process to obtain the ID.  \n- Used by hooks, deployment scripts, and resource managers to reference the stack.  \n**Technical details:**  \n- Typically a string value, possibly following a specific format or pattern.  \n- May include alphanumeric characters, dashes, and colons depending on the system.  \n- Often used as a key in databases or APIs to retrieve stack information."},"configuration":{"type":"object","description":"Configuration settings that define the behavior and parameters for the preMap hook, allowing customization of its execution prior to the mapping process. This property enables users to specify options such as input validation rules, transformation parameters, or conditional logic that tailor the hook's operation to specific requirements.\n\n**Field behavior**\n- Accepts a structured object containing key-value pairs relevant to the preMap hook.\n- Influences how the preMap hook processes data before mapping occurs.\n- Can enable or disable specific features or validations within the hook.\n- Supports dynamic adjustment of hook behavior based on provided settings.\n\n**Implementation guidance**\n- Validate the configuration object against the expected schema to ensure correctness.\n- Provide default values for optional settings to maintain consistent behavior.\n- Document all configurable options clearly for users to understand their impact.\n- Allow extensibility to accommodate future configuration parameters without breaking compatibility.\n\n**Examples**\n- Setting a flag to enable strict input validation before mapping.\n- Specifying a list of fields to exclude from the mapping process.\n- Defining transformation rules such as trimming whitespace or converting case.\n- Providing conditional logic parameters to skip mapping under certain conditions.\n\n**Important notes**\n- Incorrect configuration values may cause the preMap hook to fail or behave unexpectedly.\n- Configuration should be tested thoroughly to ensure it aligns with the intended data processing flow.\n- Changes to configuration may affect downstream processes relying on the mapped data.\n- Sensitive information should not be included in the configuration to avoid security risks.\n\n**Dependency chain**\n- Depends on the preMap hook being enabled and invoked during the processing pipeline.\n- Influences the input data that will be passed 
to subsequent mapping stages.\n- May interact with other hook configurations or global settings affecting data transformation.\n\n**Technical details**\n- Typically represented as a JSON object or equivalent data structure.\n- Parsed and applied at runtime before the mapping logic executes.\n- Supports nested configuration properties for complex customization.\n- Should be immutable during the execution of the preMap hook to ensure consistency."}},"description":"A hook function that is executed before the mapping process begins. This function allows for custom preprocessing or transformation of data prior to the main mapping logic being applied. It is typically used to modify input data, validate conditions, or set up necessary context for the mapping operation.\n\n**Field behavior**\n- Invoked once before the mapping starts.\n- Receives the raw input data or context.\n- Can modify or replace the input data before mapping.\n- Can abort or alter the flow based on custom logic.\n\n**Implementation guidance**\n- Implement as a synchronous or asynchronous function depending on the environment.\n- Ensure it returns the modified data or context for the mapping to proceed.\n- Handle errors gracefully to avoid disrupting the mapping process.\n- Use for data normalization, enrichment, or validation before mapping.\n\n**Examples**\n- Sanitizing input strings to remove unwanted characters.\n- Adding default values to missing fields.\n- Logging or auditing input data before transformation.\n- Validating input schema and throwing errors if invalid.\n\n**Important notes**\n- This hook is optional; if not provided, mapping proceeds with original input.\n- Changes made here directly affect the mapping outcome.\n- Avoid heavy computations to prevent performance bottlenecks.\n- Should not perform side effects that depend on mapping results.\n\n**Dependency chain**\n- Executed before the main mapping function.\n- Influences the data passed to subsequent mapping hooks or steps.\n- May affect downstream validation or output generation.\n\n**Technical details**\n- Typically implemented as a function with signature (inputData, context) => modifiedData.\n- Supports both synchronous return values and Promises for async operations.\n- Integrated into the mapping pipeline as the first step.\n- Can be configured or replaced depending on mapping framework capabilities."},"postMap":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the name of the function to be executed during the post-map hook phase. This function is invoked after the mapping process completes, allowing for custom processing, validation, or transformation of the mapped data. 
It should be defined within the appropriate scope and adhere to the expected signature for post-map hook functions.\n\n**Field behavior**\n- Defines the callback function to run after the mapping operation.\n- Enables customization and extension of the mapping logic.\n- Executes only once per mapping cycle during the post-map phase.\n- Can modify or augment the mapped data before final output.\n\n**Implementation guidance**\n- Ensure the function name corresponds to a valid, accessible function in the execution context.\n- The function should handle any necessary error checking and data validation.\n- Avoid long-running or blocking operations within the function to maintain performance.\n- Document the function’s expected inputs and outputs clearly.\n\n**Examples**\n- \"validateMappedData\"\n- \"transformPostMapping\"\n- \"logMappingResults\"\n\n**Important notes**\n- The function must be synchronous or properly handle asynchronous behavior if supported.\n- If the function is not defined or invalid, the post-map hook will be skipped without error.\n- This hook is optional but recommended for complex mapping scenarios requiring additional processing.\n\n**Dependency chain**\n- Depends on the mapping process completing successfully.\n- May rely on data structures produced by the mapping phase.\n- Should be compatible with other hooks or lifecycle events in the mapping pipeline.\n\n**Technical details**\n- Expected to be a string representing the function name.\n- The function should accept parameters as defined by the hook interface.\n- Execution context must have access to the function at runtime.\n- Errors thrown within the function may affect the overall mapping operation depending on error handling policies."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier for the script to be executed in the postMap hook. This ID is used to reference and invoke the specific script after the mapping process completes, enabling custom logic or transformations to be applied.  \n**Field behavior:**  \n- Accepts a string representing the script's unique ID.  \n- Must correspond to an existing script within the system.  \n- Triggers the execution of the associated script after the mapping operation finishes.  \n**Implementation guidance:**  \n- Ensure the script ID is valid and the script is properly deployed before referencing it here.  \n- Use this field to extend or customize the behavior of the mapping process via scripting.  \n- Validate the script ID to prevent runtime errors during hook execution.  \n**Examples:**  \n- \"script_12345\"  \n- \"postMapTransformScript\"  \n- \"customHookScript_v2\"  \n**Important notes:**  \n- The script referenced must be compatible with the postMap hook context.  \n- Incorrect or missing script IDs will result in the hook not executing as intended.  \n- This field is optional if no post-mapping script execution is required.  \n**Dependency chain:**  \n- Depends on the existence of the script repository or script management system.  \n- Relies on the postMap hook lifecycle event to trigger execution.  \n**Technical details:**  \n- Typically a string data type.  \n- Used internally to fetch and execute the script logic.  \n- May require permissions or access rights to execute the referenced script."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the post-map hook. 
This ID is used to reference and manage the specific stack instance within the system, enabling precise tracking and manipulation of resources or configurations tied to that stack during post-processing operations.\n\n**Field behavior**\n- Uniquely identifies a stack within the system.\n- Used during post-map hook execution to associate actions with the correct stack.\n- Immutable once set to ensure consistent reference throughout the lifecycle.\n- Required for operations that involve stack-specific context or resource management.\n\n**Implementation guidance**\n- Must be a valid and existing stack identifier within the system.\n- Should be validated for format and existence before use.\n- Ensure secure handling to prevent unauthorized access or manipulation.\n- Typically generated by the system and passed to hooks automatically.\n\n**Examples**\n- \"stack-12345\"\n- \"prod-stack-67890\"\n- \"dev-environment-stack-001\"\n\n**Important notes**\n- This ID is critical for linking hook operations to the correct stack context.\n- Incorrect or missing _stackId can lead to failed or misapplied post-map operations.\n- Should not be altered manually once assigned.\n\n**Dependency chain**\n- Depends on the stack creation or registration process that generates the ID.\n- Used by post-map hooks to perform stack-specific logic.\n- May influence downstream processes that rely on stack context.\n\n**Technical details**\n- Typically a string value.\n- Format may vary depending on the system's stack naming conventions.\n- Stored and transmitted securely to maintain integrity.\n- Used as a key in database queries or API calls related to stack management."},"configuration":{"type":"object","description":"Configuration settings that define the behavior and parameters for the postMap hook. 
This object allows customization of the hook's execution by specifying various options and values that control its operation.\n\n**Field behavior**\n- Accepts a structured object containing key-value pairs relevant to the postMap hook.\n- Determines how the postMap hook processes data after mapping.\n- Can include flags, thresholds, or other parameters influencing hook logic.\n- Optional or required depending on the specific implementation of the postMap hook.\n\n**Implementation guidance**\n- Validate the configuration object against the expected schema before use.\n- Ensure all required fields within the configuration are present and correctly typed.\n- Use default values for any missing optional parameters to maintain consistent behavior.\n- Document all configurable options clearly for users to understand their impact.\n\n**Examples**\n- `{ \"enableLogging\": true, \"maxRetries\": 3 }`\n- `{ \"timeout\": 5000, \"retryOnFailure\": false }`\n- `{ \"mappingStrategy\": \"strict\", \"caseSensitive\": true }`\n\n**Important notes**\n- Incorrect configuration values may cause the postMap hook to fail or behave unexpectedly.\n- Configuration should be tailored to the specific needs of the postMap operation.\n- Changes to configuration may require restarting or reinitializing the hook process.\n\n**Dependency chain**\n- Depends on the postMap hook being invoked.\n- May influence downstream processes that rely on the output of the postMap hook.\n- Interacts with other hook configurations if multiple hooks are chained.\n\n**Technical details**\n- Typically represented as a JSON object or equivalent data structure.\n- Parsed and validated at runtime before the hook executes.\n- Supports nested objects and arrays if the hook logic requires complex configurations."}},"description":"A hook function that is executed after the mapping process is completed. This function allows for custom logic to be applied to the mapped data, enabling modifications, validations, or side effects based on the results of the mapping operation. 
It receives the mapped data as input and can return a modified version or perform asynchronous operations.\n\n**Field behavior**\n- Invoked immediately after the mapping process finishes.\n- Receives the output of the mapping as its input parameter.\n- Can modify, augment, or validate the mapped data.\n- Supports synchronous or asynchronous execution.\n- The returned value from this hook replaces or updates the mapped data.\n\n**Implementation guidance**\n- Implement as a function that accepts the mapped data object.\n- Ensure proper handling of asynchronous operations if needed.\n- Use this hook to enforce business rules or data transformations post-mapping.\n- Avoid side effects that could interfere with subsequent processing unless intentional.\n- Validate the integrity of the data before returning it.\n\n**Examples**\n- Adjusting date formats or normalizing strings after mapping.\n- Adding computed properties based on mapped fields.\n- Logging or auditing mapped data for monitoring purposes.\n- Filtering out unwanted fields or entries from the mapped result.\n- Triggering notifications or events based on mapped data content.\n\n**Important notes**\n- This hook is optional and only invoked if defined.\n- Errors thrown within this hook may affect the overall mapping operation.\n- The hook should be performant to avoid slowing down the mapping pipeline.\n- Returned data must conform to expected schema to prevent downstream errors.\n\n**Dependency chain**\n- Depends on the completion of the primary mapping function.\n- May influence subsequent processing steps that consume the mapped data.\n- Should be coordinated with pre-mapping hooks to maintain data consistency.\n\n**Technical details**\n- Typically implemented as a callback or promise-returning function.\n- Receives one argument: the mapped data object.\n- Returns either the modified data object or a promise resolving to it.\n- Integrated into the mapping lifecycle after the main mapping logic completes."},"postSubmit":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the name of the function to be executed after the submission process completes. 
This function is typically used to perform post-submission tasks such as data processing, notifications, or cleanup operations.\n\n**Field behavior**\n- Invoked automatically after the main submission event finishes.\n- Can trigger additional workflows or side effects based on submission results.\n- Supports asynchronous execution depending on the implementation.\n\n**Implementation guidance**\n- Ensure the function name corresponds to a valid, accessible function within the execution context.\n- Validate that the function handles errors gracefully to avoid disrupting the overall submission flow.\n- Document the expected input parameters and output behavior of the function for maintainability.\n\n**Examples**\n- \"sendConfirmationEmail\"\n- \"logSubmissionData\"\n- \"updateUserStatus\"\n\n**Important notes**\n- The function must be idempotent if the submission process can be retried.\n- Avoid long-running operations within this function to prevent blocking the submission pipeline.\n- Security considerations should be taken into account, especially if the function interacts with external systems.\n\n**Dependency chain**\n- Depends on the successful completion of the submission event.\n- May rely on data generated or modified during the submission process.\n- Could trigger downstream hooks or events based on its execution.\n\n**Technical details**\n- Typically referenced by name as a string.\n- Execution context must have access to the function definition.\n- May support both synchronous and asynchronous invocation patterns depending on the platform."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier of the script to be executed after the submission process completes. This ID links the post-submit hook to a specific script that performs additional operations or custom logic once the main submission workflow has finished.\n\n**Field behavior**\n- Specifies which script is triggered automatically after the submission event.\n- Must correspond to a valid and existing script within the system.\n- Controls post-processing actions such as notifications, data transformations, or logging.\n\n**Implementation guidance**\n- Ensure the script ID is correctly referenced and the script is deployed before assigning this property.\n- Validate the script’s permissions and runtime environment compatibility.\n- Use consistent naming or ID conventions to avoid conflicts or errors.\n\n**Examples**\n- \"script_12345\"\n- \"postSubmitCleanupScript\"\n- \"notifyUserScript\"\n\n**Important notes**\n- If the script ID is invalid or missing, the post-submit hook will not execute.\n- Changes to the script ID require redeployment or configuration updates.\n- The script should be idempotent and handle errors gracefully to avoid disrupting the submission flow.\n\n**Dependency chain**\n- Depends on the existence of the script resource identified by this ID.\n- Linked to the post-submit hook configuration within the submission workflow.\n- May interact with other hooks or system components triggered after submission.\n\n**Technical details**\n- Typically represented as a string identifier.\n- Must be unique within the scope of available scripts.\n- Used by the system to locate and invoke the corresponding script at runtime."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the post-submit hook. 
This ID is used to reference and manage the specific stack instance within the system, enabling operations such as updates, deletions, or status checks after a submission event.\n\n**Field behavior**\n- Uniquely identifies a stack within the system.\n- Used to link post-submit actions to the correct stack.\n- Typically immutable once assigned to ensure consistent referencing.\n- Required for executing stack-specific post-submit logic.\n\n**Implementation guidance**\n- Ensure the ID is generated following the system’s unique identification standards.\n- Validate the presence and format of the stack ID before processing post-submit hooks.\n- Use this ID to fetch or manipulate stack-related data during post-submit operations.\n- Handle cases where the stack ID may not exist or is invalid gracefully.\n\n**Examples**\n- \"stack-12345\"\n- \"arn:aws:cloudformation:us-east-1:123456789012:stack/my-stack/abcde12345\"\n- \"proj-stack-v2\"\n\n**Important notes**\n- This ID must correspond to an existing stack within the environment.\n- Incorrect or missing stack IDs can cause post-submit hooks to fail.\n- The stack ID is critical for audit trails and debugging post-submit processes.\n\n**Dependency chain**\n- Depends on the stack creation or registration process to generate the ID.\n- Used by post-submit hook handlers to identify the target stack.\n- May be referenced by logging, monitoring, or notification systems post-submission.\n\n**Technical details**\n- Typically a string value.\n- May follow a specific format such as UUID, ARN, or custom naming conventions.\n- Stored and transmitted securely to prevent unauthorized access or manipulation.\n- Should be indexed in databases for efficient retrieval during post-submit operations."},"configuration":{"type":"object","description":"Configuration settings for the post-submit hook that define its behavior and parameters. 
This object contains key-value pairs specifying how the hook should operate after a submission event, including any necessary options, flags, or environment variables.\n\n**Field behavior**\n- Determines the specific actions and parameters used by the post-submit hook.\n- Can include nested settings tailored to the hook’s functionality.\n- Modifies the execution flow or output based on provided configuration values.\n\n**Implementation guidance**\n- Validate the configuration object against the expected schema for the post-submit hook.\n- Support extensibility to allow additional parameters without breaking existing functionality.\n- Ensure sensitive information within the configuration is handled securely.\n- Provide clear error messages if required configuration fields are missing or invalid.\n\n**Examples**\n- Setting a timeout duration for the post-submit process.\n- Specifying environment variables needed during hook execution.\n- Enabling or disabling certain features of the post-submit hook via flags.\n\n**Important notes**\n- The configuration must align with the capabilities of the post-submit hook implementation.\n- Incorrect or incomplete configuration may cause the hook to fail or behave unexpectedly.\n- Configuration changes typically require validation and testing before deployment.\n\n**Dependency chain**\n- Depends on the post-submit hook being triggered successfully.\n- May interact with other hooks or system components based on configuration.\n- Relies on the underlying system to interpret and apply configuration settings correctly.\n\n**Technical details**\n- Typically represented as a JSON object or equivalent data structure.\n- Supports nested objects and arrays for complex configurations.\n- May include string, numeric, boolean, or null values depending on parameters.\n- Parsed and applied at runtime during the post-submit hook execution phase."}},"description":"A hook that is executed immediately after records have been submitted to the destination application. 
It allows custom logic to run on the submission results, such as inspecting or modifying the response data, recording generated identifiers, triggering notifications, or performing cleanup tasks.\n\n**Field behavior**\n- Invoked only after the submit operation has finished, for both successful and failed records.\n- Receives the submission results, including response data and any errors returned by the destination system.\n- Can modify the response data before it is passed to downstream flow steps.\n- Does not affect the submit operation itself; it handles post-submission side effects only.\n\n**Implementation guidance**\n- Ensure the hook handles both success and error results gracefully.\n- Avoid long-running operations to prevent delaying the flow.\n- Reference either a script (via _scriptId) or a stack (via _stackId) that contains the hook logic.\n\n**Examples**\n- Recording the identifiers returned by the destination system for use later in the flow.\n- Sending a notification when records fail to import.\n- Cleaning up temporary data created during pre-submit processing.\n\n**Important notes**\n- This hook is optional; if not configured, no post-submission processing is performed.\n- It should not be used to alter the data that was submitted; that belongs in pre-submit processing.\n- Proper error handling within the hook is crucial to avoid disrupting the flow.\n\n**Dependency chain**\n- Depends on the submit operation completing.\n- May rely on data generated during submission, such as record identifiers.\n- Can trigger downstream steps based on the modified response data.\n\n**Technical details**\n- Configured with a function name plus either a script or stack reference.\n- The referenced function receives the submission response data and can return modified response data."},"postAggregate":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the function to be executed during the post-aggregation hook phase. This function is invoked after the aggregation process completes, allowing for custom processing, transformation, or validation of the aggregated data before it is returned or further processed.
The function should be designed to handle the aggregated data structure and can modify, enrich, or analyze the results as needed.\n\n**Field behavior**\n- Executed after the aggregation operation finishes.\n- Receives the aggregated data as input.\n- Can modify or augment the aggregated results.\n- Supports custom logic tailored to specific post-processing requirements.\n\n**Implementation guidance**\n- Ensure the function signature matches the expected input and output formats.\n- Handle potential errors within the function to avoid disrupting the overall aggregation flow.\n- Keep the function efficient to minimize latency in the post-aggregation phase.\n- Validate the output of the function to maintain data integrity.\n\n**Examples**\n- A function that filters aggregated results based on certain criteria.\n- A function that calculates additional metrics from the aggregated data.\n- A function that formats or restructures the aggregated output for downstream consumption.\n\n**Important notes**\n- The function must be deterministic and side-effect free to ensure consistent results.\n- Avoid long-running operations within the function to prevent performance bottlenecks.\n- The function should not alter the original data source, only the aggregated output.\n\n**Dependency chain**\n- Depends on the completion of the aggregation process.\n- May rely on the schema or structure of the aggregated data.\n- Can be linked to subsequent hooks or processing steps that consume the modified data.\n\n**Technical details**\n- Typically implemented as a callback or lambda function.\n- May be written in the same language as the aggregation engine or as a supported scripting language.\n- Should conform to the API’s expected interface for hook functions.\n- Execution context may include metadata about the aggregation operation."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier of the script to be executed after the aggregation process completes. This ID links the post-aggregation hook to a specific script that contains the logic or operations to be performed. 
It ensures that the correct script is triggered in the post-aggregation phase, enabling customized processing or data manipulation.\n\n**Field behavior**\n- Accepts a string representing the script's unique ID.\n- Used exclusively in the post-aggregation hook context.\n- Triggers the execution of the associated script after aggregation.\n- Must correspond to an existing and accessible script within the system.\n\n**Implementation guidance**\n- Validate that the script ID exists and is active before assignment.\n- Ensure the script linked by this ID is compatible with post-aggregation data.\n- Handle errors gracefully if the script ID is invalid or the script execution fails.\n- Maintain security by restricting script IDs to authorized scripts only.\n\n**Examples**\n- \"script12345\"\n- \"postAggScript_v2\"\n- \"cleanup_after_aggregation\"\n\n**Important notes**\n- The script ID must be unique within the scope of available scripts.\n- Changing the script ID will alter the behavior of the post-aggregation hook.\n- The script referenced should be optimized for performance to avoid delays.\n- Ensure proper permissions are set for the script to execute in this context.\n\n**Dependency chain**\n- Depends on the existence of a script repository or registry.\n- Relies on the aggregation process completing successfully before execution.\n- Interacts with the postAggregate hook mechanism to trigger execution.\n\n**Technical details**\n- Typically represented as a string data type.\n- May be stored in a database or configuration file referencing script metadata.\n- Used by the system's execution engine to locate and run the script.\n- Supports integration with scripting languages or environments supported by the platform."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the post-aggregation hook. 
This ID is used to reference and manage the specific stack context within which the hook operates, ensuring that the correct stack resources and configurations are applied during the post-aggregation process.\n\n**Field behavior**\n- Uniquely identifies the stack instance related to the post-aggregation hook.\n- Used to link the hook execution context to the appropriate stack environment.\n- Immutable once set to maintain consistency throughout the hook lifecycle.\n\n**Implementation guidance**\n- Must be a valid and existing stack identifier within the system.\n- Should be assigned automatically by the system or provided explicitly during hook configuration.\n- Ensure proper validation to prevent referencing non-existent or unauthorized stacks.\n\n**Examples**\n- \"stack-12345\"\n- \"prod-stack-67890\"\n- \"dev-environment-stack-001\"\n\n**Important notes**\n- This ID is critical for resolving stack-specific resources and permissions.\n- Incorrect or missing _stackId values can lead to hook execution failures.\n- Should be handled securely to prevent unauthorized access to stack data.\n\n**Dependency chain**\n- Depends on the stack management system to generate and maintain stack IDs.\n- Utilized by the post-aggregation hook execution engine to retrieve stack context.\n- May influence downstream processes that rely on stack-specific configurations.\n\n**Technical details**\n- Typically represented as a string.\n- Format and length may vary depending on the stack management system.\n- Should be indexed or cached for efficient lookup during hook execution."},"configuration":{"type":"object","description":"Configuration settings for the post-aggregate hook that define its behavior and parameters during execution. This property allows customization of the hook's operation by specifying key-value pairs or structured data relevant to the hook's logic.\n\n**Field behavior**\n- Accepts structured data or key-value pairs that tailor the hook's functionality.\n- Influences how the post-aggregate hook processes data after aggregation.\n- Can be optional or required depending on the specific hook implementation.\n- Supports dynamic adjustment of hook behavior without code changes.\n\n**Implementation guidance**\n- Validate configuration inputs to ensure they conform to expected formats and types.\n- Document all configurable options clearly for users to understand their impact.\n- Allow for extensibility to accommodate future configuration parameters.\n- Ensure secure handling of configuration data to prevent injection or misuse.\n\n**Examples**\n- {\"threshold\": 10, \"mode\": \"strict\"}\n- {\"enableLogging\": true, \"retryCount\": 3}\n- {\"filters\": [\"status:active\", \"region:us-east\"]}\n\n**Important notes**\n- Incorrect configuration values may cause the hook to malfunction or produce unexpected results.\n- Configuration should be versioned or tracked to maintain consistency across deployments.\n- Sensitive information should not be included in configuration unless properly encrypted or secured.\n\n**Dependency chain**\n- Depends on the specific post-aggregate hook implementation to interpret configuration.\n- May interact with other hook properties such as conditions or triggers.\n- Configuration changes can affect downstream processing or output of the aggregation.\n\n**Technical details**\n- Typically represented as a JSON object or map structure.\n- Parsed and applied at runtime when the post-aggregate hook is invoked.\n- Supports nested structures for complex configuration scenarios.\n- 
Should be compatible with the overall API schema and validation rules."}},"description":"A hook function that is executed after the aggregate operation has completed. This hook allows for custom logic to be applied to the aggregated results before they are returned or further processed. It is typically used to modify, filter, or augment the aggregated data based on specific application requirements.\n\n**Field behavior**\n- Invoked immediately after the aggregate query execution.\n- Receives the aggregated results as input.\n- Can modify or replace the aggregated data.\n- Supports asynchronous operations if needed.\n- Does not affect the execution of the aggregate operation itself.\n\n**Implementation guidance**\n- Ensure the hook handles errors gracefully to avoid disrupting the response.\n- Use this hook to implement custom transformations or validations on aggregated data.\n- Avoid heavy computations to maintain performance.\n- Return the modified data or the original data if no changes are needed.\n- Consider security implications when modifying aggregated results.\n\n**Examples**\n- Filtering out sensitive fields from aggregated results.\n- Adding computed properties based on aggregation output.\n- Logging or auditing aggregated data for monitoring purposes.\n- Transforming aggregated data into a different format before sending to clients.\n\n**Important notes**\n- This hook is specific to aggregate operations and will not trigger on other query types.\n- Modifications in this hook do not affect the underlying database.\n- The hook should return the final data to be used downstream.\n- Properly handle asynchronous code to ensure the hook completes before response.\n\n**Dependency chain**\n- Triggered after the aggregate query execution phase.\n- Can influence the data passed to subsequent hooks or response handlers.\n- Dependent on the successful completion of the aggregate operation.\n\n**Technical details**\n- Typically implemented as a function accepting the aggregated results and context.\n- Supports both synchronous and asynchronous function signatures.\n- Receives parameters such as the aggregated data, query context, and hook metadata.\n- Expected to return the processed aggregated data or a promise resolving to it."}},"description":"Defines a collection of hook functions or callbacks that are triggered at specific points during the execution lifecycle of a process or operation. 
These hooks allow customization and extension of default behavior by executing user-defined logic before, after, or during certain events.\n\n**Field behavior**\n- Supports multiple hook functions, each associated with a particular event or lifecycle stage.\n- Hooks are invoked automatically by the system when their corresponding event occurs.\n- Can be used to modify input data, handle side effects, perform validations, or trigger additional processes.\n- Execution order of hooks may be sequential or parallel depending on implementation.\n\n**Implementation guidance**\n- Ensure hooks are registered with clear event names or identifiers.\n- Validate hook functions for correct signature and expected parameters.\n- Provide error handling within hooks to avoid disrupting the main process.\n- Document available hook points and expected behavior for each.\n- Allow hooks to be asynchronous if the environment supports it.\n\n**Examples**\n- A \"beforeSave\" hook that validates data before saving to a database.\n- An \"afterFetch\" hook that formats or enriches data after retrieval.\n- A \"beforeDelete\" hook that checks permissions before allowing deletion.\n- A \"onError\" hook that logs errors or sends notifications.\n\n**Important notes**\n- Hooks should not introduce significant latency or side effects that impact core functionality.\n- Avoid circular dependencies or infinite loops caused by hooks triggering each other.\n- Security considerations must be taken into account when executing user-defined hooks.\n- Hooks may have access to sensitive data; ensure proper access controls.\n\n**Dependency chain**\n- Hooks depend on the lifecycle events or triggers defined by the system.\n- Hook execution may depend on the successful completion of prior steps.\n- Hook results can influence subsequent processing stages or final outcomes.\n\n**Technical details**\n- Typically implemented as functions, methods, or callbacks registered in a map or list.\n- May support synchronous or asynchronous execution models.\n- Can accept parameters such as context objects, event data, or state information.\n- Return values from hooks may be used to modify behavior or data flow.\n- Often integrated with event emitter or observer patterns."},"sampleResponseData":{"type":"object","description":"Contains example data representing a typical response returned by the API endpoint. This sample response data helps developers understand the structure, format, and types of values they can expect when interacting with the API. 
It serves as a reference for testing, debugging, and documentation purposes.\n\n**Field behavior**\n- Represents a static example of the API's response payload.\n- Should reflect the most common or typical response scenario.\n- May include nested objects, arrays, and various data types to illustrate the response structure.\n- Does not affect the actual API behavior or response generation.\n\n**Implementation guidance**\n- Ensure the sample data is accurate and up-to-date with the current API response schema.\n- Include all required fields and typical optional fields to provide a comprehensive example.\n- Use realistic values that clearly demonstrate the data format and constraints.\n- Update the sample response whenever the API response structure changes.\n\n**Examples**\n- A JSON object showing user details returned from a user info endpoint.\n- An array of product items with fields like id, name, price, and availability.\n- A nested object illustrating a complex response with embedded metadata and links.\n\n**Important notes**\n- This data is for illustrative purposes only and should not be used as actual input or output.\n- It helps consumers of the API to quickly grasp the expected response format.\n- Should be consistent with the API specification and schema definitions.\n\n**Dependency chain**\n- Depends on the API endpoint's response schema and data model.\n- Should align with any validation rules or constraints defined for the response.\n- May be linked to example requests or other documentation elements for completeness.\n\n**Technical details**\n- Typically formatted as a JSON or XML snippet matching the API's response content type.\n- May include placeholder values or sample identifiers.\n- Should be parsable and valid according to the API's response schema."},"responseTransform":{"allOf":[{"description":"Data transformation configuration for reshaping API response data during import operations.\n\n**Import-specific behavior**\n\n**Response Transformation**: Transforms the response data returned by the destination system after records have been imported. This allows you to reshape, filter, or enrich the response before it is processed by downstream flow steps or returned to the client.\n\n**Common Use Cases**:\n- Extracting relevant fields from verbose API responses\n- Normalizing response formats across different destination systems\n- Adding computed fields based on response data\n- Filtering out sensitive information before logging or further processing\n"},{"$ref":"#/components/schemas/Transform"}]},"modelMetadata":{"type":"object","description":"Contains detailed metadata about the model, including its version, architecture, training data characteristics, and any relevant configuration parameters that define its behavior and capabilities. 
This information helps in understanding the model's provenance, performance expectations, and compatibility with various tasks or environments.\n**Field behavior**\n- Captures comprehensive details about the model's identity and configuration.\n- May include version numbers, architecture type, training dataset descriptions, and hyperparameters.\n- Used for tracking model lineage and ensuring reproducibility.\n- Supports validation and compatibility checks before deployment or usage.\n**Implementation guidance**\n- Ensure metadata is accurate and up-to-date with the current model iteration.\n- Include standardized fields for easy parsing and comparison across models.\n- Use consistent formatting and naming conventions for metadata attributes.\n- Allow extensibility to accommodate future metadata elements without breaking compatibility.\n**Examples**\n- Model version: \"v1.2.3\"\n- Architecture: \"Transformer-based, 12 layers, 768 hidden units\"\n- Training data: \"Dataset XYZ, 1 million samples, balanced classes\"\n- Hyperparameters: \"learning_rate=0.001, batch_size=32\"\n**Important notes**\n- Metadata should be immutable once the model is finalized to ensure traceability.\n- Sensitive information such as proprietary training data details should be handled carefully.\n- Metadata completeness directly impacts model governance and audit processes.\n- Incomplete or inaccurate metadata can lead to misuse or misinterpretation of the model.\n**Dependency chain**\n- Relies on accurate model training and version control systems.\n- Interacts with deployment pipelines that validate model compatibility.\n- Supports monitoring systems that track model performance over time.\n**Technical details**\n- Typically represented as a structured object or JSON document.\n- May include nested fields for complex metadata attributes.\n- Should be easily serializable and deserializable for storage and transmission.\n- Can be linked to external documentation or repositories for extended information."},"mapping":{"type":"object","properties":{"fields":{"type":"string","enum":["string","number","boolean","numberarray","stringarray","json"],"description":"Specifies the data type applied to field-level mappings in this mapping configuration. 
The value must be one of the enumerated types and determines how mapped values are interpreted and converted before they are written to the destination.\n\n**Field behavior**\n- Accepts one of: \"string\", \"number\", \"boolean\", \"numberarray\", \"stringarray\", or \"json\".\n- Controls type coercion of mapped values prior to submission to the destination system.\n\n**Implementation guidance**\n- Choose the type that matches the destination field; mismatched types can cause import errors or silently coerced values.\n- Use the array types (\"numberarray\", \"stringarray\") for multi-value destination fields and \"json\" for structured values.\n\n**Important notes**\n- Changing the type affects how every value mapped through this entry is converted.\n- Values that cannot be converted to the selected type may be rejected by the destination."},"lists":{"type":"string","enum":["string","number","boolean","numberarray","stringarray"],"description":"Specifies the data type applied to list-level mappings, which handle repeating rows or arrays of records within the source data. 
The value must be one of the enumerated types.\n\n**Field behavior**\n- Accepts one of: \"string\", \"number\", \"boolean\", \"numberarray\", or \"stringarray\".\n- Controls how values mapped inside list (array) structures are converted before being written to the destination.\n\n**Implementation guidance**\n- Use list mappings when the source record contains repeating elements that must be mapped as a group.\n- Choose the type that matches the destination field to avoid conversion errors.\n\n**Important notes**\n- Changes to list mappings affect every record in the repeating structure; test thoroughly after updates."}},"description":"Defines the association between input data fields and their corresponding target fields or processing rules within the system. This mapping specifies how data should be transformed, routed, or interpreted during processing to ensure accurate and consistent handling.
It serves as a blueprint for data translation and integration tasks, enabling seamless interoperability between different data formats or components.\n\n**Field behavior**\n- Maps source fields to target fields or processing instructions.\n- Determines how input data is transformed or routed.\n- Can include nested or hierarchical mappings for complex data structures.\n- Supports conditional or dynamic mapping rules based on input values.\n\n**Implementation guidance**\n- Ensure mappings are clearly defined and validated to prevent data loss or misinterpretation.\n- Use consistent naming conventions for source and target fields.\n- Support extensibility to accommodate new fields or transformation rules.\n- Provide mechanisms for error handling when mappings fail or are incomplete.\n\n**Examples**\n- Mapping a JSON input field \"user_name\" to a database column \"username\".\n- Defining a transformation rule that converts date formats from \"MM/DD/YYYY\" to \"YYYY-MM-DD\".\n- Routing data from an input field \"status\" to different processing modules based on its value.\n\n**Important notes**\n- Incorrect or incomplete mappings can lead to data corruption or processing errors.\n- Mapping definitions should be version-controlled to track changes over time.\n- Consider performance implications when applying complex or large-scale mappings.\n\n**Dependency chain**\n- Dependent on the structure and schema of input data.\n- Influences downstream data processing, validation, and storage components.\n- May interact with transformation, validation, and routing modules.\n\n**Technical details**\n- Typically represented as key-value pairs, dictionaries, or mapping objects.\n- May support expressions or functions for dynamic value computation.\n- Can be serialized in formats such as JSON, YAML, or XML for configuration.\n- Should include metadata for data types, optionality, and default values where applicable."},"mappings":{"allOf":[{"description":"Field mapping configuration for transforming incoming records during import operations.\n\n**Import-specific behavior**\n\n**Data Transformation**: Mappings define how fields from incoming records are transformed and mapped to the destination system's field structure. This enables data normalization, field renaming, value transformation, and complex nested object construction.\n\n**Common Use Cases**:\n- Mapping source field names to destination field names\n- Transforming data types and formats\n- Building nested object structures required by the destination\n- Applying formulas and lookups to derive field values\n"},{"$ref":"#/components/schemas/Mappings"}]},"lookups":{"type":"string","description":"A collection of predefined reference data or mappings used to standardize and validate input values within the system. 
Lookups typically consist of key-value pairs or enumerations that facilitate consistent data interpretation and reduce errors by providing a controlled vocabulary or set of options.\n\n**Field behavior**\n- Serves as a centralized source for reference data used across various components.\n- Enables validation and normalization of input values against predefined sets.\n- Supports dynamic retrieval of lookup values to accommodate updates without code changes.\n- May include hierarchical or categorized data structures for complex reference data.\n\n**Implementation guidance**\n- Ensure lookups are comprehensive and cover all necessary reference data for the application domain.\n- Design lookups to be easily extendable and maintainable, allowing additions or modifications without impacting existing functionality.\n- Implement caching mechanisms if lookups are frequently accessed to optimize performance.\n- Provide clear documentation for each lookup entry to aid developers and users in understanding their purpose.\n\n**Examples**\n- Country codes and names (e.g., \"US\" -> \"United States\").\n- Status codes for order processing (e.g., \"PENDING\", \"SHIPPED\", \"DELIVERED\").\n- Currency codes and symbols (e.g., \"USD\" -> \"$\").\n- Product categories or types used in an inventory system.\n\n**Important notes**\n- Lookups should be kept up-to-date to reflect changes in business rules or external standards.\n- Avoid hardcoding lookup values in multiple places; centralize them to maintain consistency.\n- Consider localization requirements if lookup values need to be presented in multiple languages.\n- Validate input data against lookups to prevent invalid or unsupported values from entering the system.\n\n**Dependency chain**\n- Often used by input validation modules, UI dropdowns, reporting tools, and business logic components.\n- May depend on external data sources or configuration files for initialization.\n- Can influence downstream processing and data storage by enforcing standardized values.\n\n**Technical details**\n- Typically implemented as dictionaries, maps, or database tables.\n- May support versioning to track changes over time.\n- Can be exposed via APIs for dynamic retrieval by client applications.\n- Should include mechanisms for synchronization if distributed across multiple systems."},"settingsForm":{"$ref":"#/components/schemas/Form"},"preSave":{"$ref":"#/components/schemas/PreSave"},"settings":{"$ref":"#/components/schemas/Settings"}}},"As2":{"type":"object","description":"Configuration for As2 imports","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"The unique identifier for the Trading Partner (TP) Connector utilized within the AS2 communication framework. This identifier serves as a critical reference that links the AS2 configuration to the specific connector responsible for the secure transmission and reception of AS2 messages. It ensures that all message exchanges are accurately routed through the designated connector, thereby maintaining the integrity, security, and reliability of the communication process. This ID is fundamental for establishing, managing, and terminating AS2 sessions, enabling proper encryption, digital signing, and delivery confirmation of messages. 
It acts as a key element in the orchestration of AS2 workflows, ensuring seamless interoperability between trading partners.\n\n**Field behavior**\n- Uniquely identifies a TP Connector within the trading partner ecosystem.\n- Associates AS2 message exchanges with the appropriate connector configuration.\n- Essential for initiating, maintaining, and terminating AS2 communication sessions.\n- Immutable once set to preserve consistent routing and session management.\n- Validated against existing connectors to prevent misconfiguration.\n\n**Implementation guidance**\n- Must correspond to a valid, active connector ID registered in the trading partner management system.\n- Should be assigned during initial AS2 setup and remain unchanged unless a deliberate reconfiguration is required.\n- Implement validation checks to confirm the ID’s existence and compatibility with AS2 protocols before assignment.\n- Ensure synchronization between the connector ID and its configuration to avoid communication failures.\n- Handle updates cautiously, with appropriate notifications and fallback mechanisms to maintain session continuity.\n\n**Examples**\n- \"connector-12345\"\n- \"tp-connector-67890\"\n- \"as2-connector-001\"\n\n**Important notes**\n- Incorrect or invalid IDs can lead to failed message transmissions or misrouted communications.\n- The referenced connector must be fully configured to support AS2 standards, including security and protocol settings.\n- Changes to this identifier should be managed through controlled processes to prevent disruption of active AS2 sessions.\n- This ID is integral to the AS2 message lifecycle, impacting message encryption, signing, and delivery confirmation.\n\n**Dependency chain**\n- Relies on the existence and proper configuration of a TP Connector entity within the trading partner management system.\n- Interacts closely with AS2 configuration parameters that govern message handling, security credentials, and session management.\n- Dependent on system components responsible for connector registration, validation, and lifecycle management.\n\n**Technical details**\n- Represented as a string conforming to the identifier format defined by the trading partner management system.\n- Used internally by the AS2"},"fileNameTemplate":{"type":"string","description":"fileNameTemplate specifies the template string used to dynamically generate file names for AS2 message payloads during both transmission and receipt processes. This template allows the inclusion of customizable placeholders that are replaced at runtime with relevant contextual values such as timestamps, unique message identifiers, sender and receiver IDs, or other metadata. By leveraging these placeholders, the system ensures that generated file names are unique, descriptive, and meaningful, facilitating efficient file management, traceability, and downstream processing. 
The template is crucial for systematically organizing files, preventing naming conflicts, supporting audit trails, and enabling seamless integration with various file storage and transfer mechanisms.\n\n**Field behavior**\n- Defines the naming pattern and structure for AS2 message payload files.\n- Supports dynamic substitution of placeholders with runtime metadata values.\n- Ensures uniqueness and clarity in generated file names to prevent collisions.\n- Directly influences file storage organization, retrieval efficiency, and processing workflows.\n- Affects audit trails and logging by enforcing consistent and meaningful file naming conventions.\n- Applies uniformly to both outgoing and incoming AS2 message payloads to maintain consistency.\n- Allows flexibility to adapt file naming to organizational standards and compliance requirements.\n\n**Implementation guidance**\n- Use clear, descriptive placeholders such as {timestamp}, {messageId}, {senderId}, {receiverId}, and {date} to capture relevant metadata.\n- Validate templates to exclude invalid or unsupported characters based on the target file system’s constraints.\n- Incorporate date and time components to improve uniqueness and enable chronological sorting of files.\n- Explicitly include file extensions (e.g., .xml, .edi, .txt) to reflect the payload format and ensure compatibility with downstream systems.\n- Provide sensible default templates to maintain system functionality when custom templates are not specified.\n- Implement robust parsing and substitution logic to reliably handle all supported placeholders and edge cases.\n- Sanitize substituted values to prevent injection of unsafe, invalid, or reserved characters that could cause errors.\n- Consider locale and timezone settings when generating date/time placeholders to ensure consistency across systems.\n- Support escape sequences or delimiter customization to handle special characters within templates if necessary.\n\n**Examples**\n- \"AS2_{senderId}_{timestamp}.xml\"\n- \"{messageId}_{receiverId}_{date}.edi\"\n- \"payload_{timestamp}.txt\"\n- \"msg_{date}_{senderId}_{receiverId}_{messageId}.xml\"\n- \"AS2_{timestamp}_{messageId}.payload\"\n- \"inbound_{receiverId}_{date}_{messageId}.xml\"\n-"},"messageIdTemplate":{"type":"string","description":"messageIdTemplate is a customizable template string designed to generate the Message-ID header for AS2 messages, enabling the creation of unique, meaningful, and standards-compliant identifiers for each message. This template supports dynamic placeholders that are replaced with actual runtime values—such as timestamps, UUIDs, sequence numbers, sender-specific information, or other contextual data—ensuring that every Message-ID is globally unique and strictly adheres to RFC 5322 email header formatting standards. 
By defining the precise format of the Message-ID, this property plays a critical role in message tracking, logging, troubleshooting, auditing, and interoperability within AS2 communication workflows, facilitating reliable message correlation and lifecycle management.\n\n**Field behavior**\n- Specifies the exact format and structure of the Message-ID header in AS2 protocol messages.\n- Supports dynamic placeholders that are substituted with real-time values during message creation.\n- Guarantees global uniqueness of each Message-ID to prevent duplication and enable accurate message identification.\n- Directly influences message correlation, tracking, diagnostics, and auditing processes across AS2 systems.\n- Ensures compliance with email header syntax and AS2 protocol requirements to maintain interoperability.\n- Affects downstream processing by systems that rely on Message-ID for message lifecycle management.\n\n**Implementation guidance**\n- Use a clear and consistent placeholder syntax (e.g., `${placeholderName}`) for dynamic content insertion.\n- Incorporate unique elements such as UUIDs, timestamps, sequence numbers, sender domains, or other identifiers to guarantee global uniqueness.\n- Validate the generated Message-ID string against AS2 protocol specifications and RFC 5322 email header standards.\n- Ensure the final Message-ID is properly formatted, enclosed in angle brackets (`< >`), and free from invalid or disallowed characters.\n- Avoid embedding sensitive or confidential information within the Message-ID to mitigate potential security risks.\n- Thoroughly test the template to confirm compatibility with receiving AS2 systems and message processing components.\n- Consider the impact of the template on downstream systems that rely on Message-ID for message correlation and tracking.\n- Maintain consistency in template usage across environments to support reliable message auditing and troubleshooting.\n\n**Examples**\n- `<${uuid}@example.com>` — generates a Message-ID combining a UUID with a domain name.\n- `<${timestamp}.${sequence}@as2sender.com>` — includes a timestamp and sequence number for ordered uniqueness.\n- `<msg-${messageNumber}@${senderDomain}>` — uses message number and sender domain placeholders for traceability.\n- `<${date}-${uuid}@company"},"maxRetries":{"type":"number","description":"Maximum number of retry attempts for the AS2 message transmission in case of transient failures such as network errors, timeouts, or temporary unavailability of the recipient system. This setting controls how many times the system will automatically attempt to resend a message after an initial failure before marking the transmission as failed. 
It applies exclusively to retry attempts following the first transmission and does not influence the initial send operation.\n\n**Field behavior**\n- Specifies the total count of resend attempts after the initial transmission failure.\n- Retries are triggered only for recoverable, transient errors where subsequent attempts have a reasonable chance of success.\n- Retry attempts cease once the configured maximum number is reached, resulting in a final failure status.\n- Does not affect the initial message send; governs only the retry logic after the first failure.\n- Retry attempts are typically spaced out using backoff strategies to prevent overwhelming the network or recipient system.\n- Each retry is initiated only after the previous attempt has conclusively failed.\n- The retry mechanism respects configured backoff intervals and timing constraints to optimize network usage and avoid congestion.\n\n**Implementation guidance**\n- Select a balanced default value to avoid excessive retries that could cause delays or strain system resources.\n- Implement exponential backoff or incremental delay strategies between retries to reduce network congestion and improve success rates.\n- Ensure retry attempts comply with overall operation timeouts and do not exceed system or protocol limits.\n- Validate input to accept only non-negative integers; setting zero disables retries entirely.\n- Integrate retry logic with error detection, logging, and alerting mechanisms for comprehensive failure handling and monitoring.\n- Consider the impact of retries on message ordering and idempotency to prevent duplicate processing or inconsistent states.\n- Provide clear feedback and detailed logging for each retry attempt to facilitate troubleshooting and operational transparency.\n\n**Examples**\n- maxRetries: 3 — The system will attempt to resend the message up to three times after the initial failure before giving up.\n- maxRetries: 0 — No retries will be performed; the failure is reported immediately after the first unsuccessful attempt.\n- maxRetries: 5 — Allows up to five retry attempts, increasing the chance of successful delivery in unstable network conditions.\n- maxRetries: 1 — A single retry attempt is made, suitable for environments with low tolerance for delays.\n\n**Important notes**\n- Excessive retry attempts can increase latency and consume additional system and network resources.\n- Retry behavior should align with AS2 protocol standards and best practices to maintain compliance and interoperability.\n- This setting improves"},"headers":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the MIME type of the AS2 message content, indicating the format of the data being transmitted. 
This helps the receiving system understand how to process and interpret the message payload.\n\n**Field behavior**\n- Defines the content type of the AS2 message payload.\n- Used by the receiver to determine how to parse and handle the message.\n- Typically corresponds to standard MIME types such as \"application/edi-x12\", \"application/edi-consent\", or \"application/xml\".\n- May influence processing rules, such as encryption, compression, or validation steps.\n\n**Implementation guidance**\n- Ensure the value is a valid MIME type string.\n- Use standard MIME types relevant to AS2 transmissions.\n- Avoid custom or non-standard MIME types unless both sender and receiver explicitly support them.\n- Validate the type value against expected content formats in the AS2 communication agreement.\n\n**Examples**\n- \"application/edi-x12\"\n- \"application/edi-consent\"\n- \"application/xml\"\n- \"text/plain\"\n\n**Important notes**\n- The type field is critical for interoperability between AS2 trading partners.\n- Incorrect or missing type values can lead to message processing errors or rejection.\n- This field is part of the AS2 message headers and should conform to AS2 protocol specifications.\n\n**Dependency chain**\n- Depends on the content of the AS2 message payload.\n- Influences downstream processing components that handle message parsing and validation.\n- May be linked with other headers such as content-transfer-encoding or content-disposition.\n\n**Technical details**\n- Represented as a string in MIME type format.\n- Included in the AS2 message headers under the \"Content-Type\" header.\n- Should comply with RFC 2045 and RFC 2046 MIME standards."},"value":{"type":"string","description":"The value of the AS2 header, representing the content associated with the specified header name.\n\n**Field behavior**\n- Holds the actual content or data for the AS2 header identified by the corresponding header name.\n- Can be a string or any valid data type supported by the AS2 protocol for header values.\n- Used during the construction or parsing of AS2 messages to specify header details.\n\n**Implementation guidance**\n- Ensure the value conforms to the expected format and encoding for AS2 headers.\n- Validate the value against any protocol-specific constraints or length limits.\n- When setting this value, consider the impact on message integrity and compliance with AS2 standards.\n- Support dynamic assignment to accommodate varying header requirements.\n\n**Examples**\n- \"application/edi-x12\"\n- \"12345\"\n- \"attachment; filename=\\\"invoice.xml\\\"\"\n- \"UTF-8\"\n\n**Important notes**\n- The value must be compatible with the corresponding header name to maintain protocol correctness.\n- Incorrect or malformed values can lead to message rejection or processing errors.\n- Some headers may require specific formatting or encoding (e.g., MIME types, character sets).\n\n**Dependency chain**\n- Dependent on the `as2.headers.name` property which specifies the header name.\n- Influences the overall AS2 message headers structure and processing logic.\n\n**Technical details**\n- Typically represented as a string in the AS2 message headers.\n- May require encoding or escaping depending on the header content.\n- Should comply with RFC 4130 and related AS2 specifications for header formatting."},"_id":{"type":"object","description":"Unique identifier for the AS2 message header.\n\n**Field behavior**\n- Serves as a unique key to identify the AS2 message header within the system.\n- Typically assigned 
automatically by the system or provided by the client to ensure message traceability.\n- Used to correlate messages and track message processing status.\n\n**Implementation guidance**\n- Should be a string value that uniquely identifies the header.\n- Must be immutable once assigned to prevent inconsistencies.\n- Should follow a consistent format (e.g., UUID, GUID) to avoid collisions.\n- Ensure uniqueness across all AS2 message headers in the system.\n\n**Examples**\n- \"123e4567-e89b-12d3-a456-426614174000\"\n- \"AS2Header_001\"\n- \"msg-header-20240427-0001\"\n\n**Important notes**\n- This field is critical for message tracking and auditing.\n- Avoid using easily guessable or sequential values to enhance security.\n- If not provided, the system should generate a unique identifier automatically.\n\n**Dependency chain**\n- May be referenced by other components or logs that track message processing.\n- Related to message metadata and processing status fields.\n\n**Technical details**\n- Data type: string\n- Format: typically UUID or a unique string identifier\n- Immutable after creation\n- Indexed for efficient lookup in databases or message stores"},"default":{"type":["object","null"],"description":"A boolean property that specifies whether the current header should be used as the default header in the AS2 message configuration. When set to true, this header will be applied automatically unless overridden by a more specific header setting.\n\n**Field behavior**\n- Determines if the header is the default choice for AS2 message headers.\n- If true, this header is applied automatically in the absence of other specific header configurations.\n- If false or omitted, the header is not considered the default and must be explicitly specified to be used.\n\n**Implementation guidance**\n- Use this property to designate a fallback or standard header configuration for AS2 messages.\n- Ensure only one header is marked as default to avoid conflicts.\n- Validate the boolean value to accept only true or false.\n- Consider the impact on message routing and processing when setting this property.\n\n**Examples**\n- `default: true` — This header is the default and will be used unless another header is specified.\n- `default: false` — This header is not the default and must be explicitly selected.\n\n**Important notes**\n- Setting multiple headers as default may lead to unpredictable behavior.\n- The default header is applied only when no other header overrides are present.\n- This property is optional; if omitted, the header is not treated as default.\n\n**Dependency chain**\n- Related to other header properties within `as2.headers`.\n- Influences message header selection logic in AS2 message processing.\n- May interact with routing or profile configurations that specify header usage.\n\n**Technical details**\n- Data type: boolean\n- Default value: false (if omitted)\n- Located under the `as2.headers` object in the configuration schema\n- Used during AS2 message construction to determine header application"}},"description":"A collection of HTTP headers included in the AS2 message transmission, serving as essential metadata and control information that governs the accurate handling, routing, authentication, and processing of the AS2 message between trading partners. These headers must strictly adhere to standard HTTP header formats and encoding rules, encompassing mandatory fields such as Content-Type, Message-ID, AS2-From, AS2-To, and other protocol-specific headers critical for AS2 communication. 
Proper configuration and validation of these headers are vital to maintaining message integrity, security (including encryption and digital signatures), and ensuring successful interoperability within the AS2 ecosystem. This collection supports both standard and custom headers as required by specific trading partner agreements or business needs, with header names treated as case-insensitive per HTTP standards.  \n**Field behavior:**  \n- Represents key-value pairs of HTTP headers transmitted alongside the AS2 message.  \n- Directly influences message routing, identification, authentication, security, and processing by the receiving system.  \n- Supports inclusion of both mandatory AS2 headers and additional custom headers as dictated by trading partner agreements or specific business requirements.  \n- Header names are case-insensitive, consistent with HTTP/1.1 specifications.  \n- Facilitates communication of protocol-specific instructions, security parameters (e.g., MIC, Disposition-Notification-To), and message metadata essential for AS2 transactions.  \n- Enables conveying information necessary for message encryption, signing, and receipt acknowledgments.  \n**Implementation guidance:**  \n- Ensure all header names and values conform strictly to HTTP/1.1 header syntax, character encoding, and escaping rules.  \n- Include all mandatory AS2 headers such as AS2-From, AS2-To, Message-ID, Content-Type, and security-related headers like MIC and Disposition-Notification-To.  \n- Avoid duplicate headers unless explicitly allowed by the AS2 specification or trading partner agreements.  \n- Validate header values rigorously for correctness, format compliance, and security to prevent injection attacks or protocol violations.  \n- Consider the impact of headers on message security features including encryption, digital signatures, and receipt acknowledgments.  \n- Maintain consistency with trading partner agreements regarding required, optional, and custom headers to ensure seamless interoperability.  \n- Use appropriate character encoding and escape sequences to handle special or non-ASCII characters within header values.  \n**Examples:**  \n- Content-Type: application/edi-x12  \n- AS2-From: PartnerA  \n- AS2-To: PartnerB  \n- Message-ID: <unique-message-id@sender-domain>"}}},"Dynamodb":{"type":"object","description":"Configuration for Dynamodb imports","properties":{"region":{"type":"string","description":"Specifies the AWS region where the DynamoDB service is hosted and where all database operations will be executed. This property defines the precise geographical location of the DynamoDB instance, which is essential for optimizing application latency, ensuring compliance with data residency and sovereignty regulations, and aligning with organizational infrastructure strategies. Selecting the appropriate region helps minimize network latency by placing data physically closer to end-users or application servers, thereby improving performance and responsiveness. Additionally, the chosen region influences service availability, pricing structures, feature sets, and legal compliance requirements, making it a critical configuration parameter. The region value must be a valid AWS region identifier (such as \"us-east-1\" or \"eu-west-1\") and is used to construct the correct service endpoint URL for all DynamoDB interactions. 
Proper configuration of this property is vital for seamless integration with AWS SDKs, authentication mechanisms, and IAM policies that may be region-specific, ensuring secure and efficient access to DynamoDB resources.\n\n**Field behavior**\n- Determines the AWS data center location for all DynamoDB operations.\n- Directly impacts network latency, data residency, and compliance adherence.\n- Must be specified as a valid and supported AWS region code.\n- Influences the endpoint URL, authentication endpoints, and service routing.\n- Affects service availability, pricing, feature availability, and latency.\n- Remains consistent throughout the lifecycle of the application unless explicitly changed.\n\n**Implementation guidance**\n- Validate the region against the current list of supported AWS regions to prevent misconfiguration and runtime errors.\n- Use the region value to dynamically construct the DynamoDB service endpoint URL and configure SDK clients accordingly.\n- Allow the region to be configurable and overrideable to support testing environments, multi-region deployments, disaster recovery, or failover scenarios.\n- Ensure consistency of the region setting across all AWS service configurations within the application to avoid cross-region conflicts and data inconsistency.\n- Consider the implications of region selection on data sovereignty laws, compliance requirements, and organizational policies.\n- Monitor AWS announcements for new regions or changes in service availability to keep configurations up to date.\n\n**Examples**\n- \"us-east-1\" (Northern Virginia, USA)\n- \"eu-west-1\" (Ireland, Europe)\n- \"ap-southeast-2\" (Sydney, Australia)\n- \"ca-central-1\" (Canada Central)\n- \"sa-east-1\" (São Paulo, Brazil)\n\n**Important notes**\n- Selecting the correct region is crucial for application performance, cost efficiency, and compliance with data residency requirements."},"method":{"type":"string","enum":["putItem","updateItem"],"description":"Specifies which DynamoDB write operation is used when sending records to the table. The only supported values are \"putItem\" and \"updateItem\".\n\n**Field behavior**\n- \"putItem\" creates a new item or completely replaces an existing item that has the same primary key.\n- \"updateItem\" modifies the attributes of an existing item with the given primary key (creating the item if it does not exist) and leaves unmapped attributes untouched.\n\n**Implementation guidance**\n- Use \"putItem\" when each incoming record represents the full item to be stored.\n- Use \"updateItem\" when incoming records carry partial updates that should be merged into existing items.\n- Ensure the mapped data always provides the table's partition key (and sort key, if the table defines one), since both operations require the complete primary key.\n\n**Important notes**\n- \"putItem\" overwrites all attributes of an existing item, which can cause unintended data loss if records are partial.\n- \"updateItem\" only touches the attributes included in the request, making it the safer choice for incremental updates.\n\n**Dependency chain**\n- Works together with the tableName, partitionKey, and sortKey settings to address the target item.\n- Relies on the configured AWS connection having permission to perform the selected operation on the table."},"tableName":{"type":"string","description":"The name of the DynamoDB table to be accessed or manipulated, serving as the primary identifier for routing API requests to the correct data store. This string must exactly match the name of an existing DynamoDB table within the specified AWS account and region, including case sensitivity.
It is essential for performing operations such as reading, writing, updating, or deleting items within the table, ensuring that all data interactions target the intended resource accurately.\n\n**Field behavior**\n- Specifies the exact DynamoDB table targeted for all data operations.\n- Must be unique within the AWS account and region to prevent ambiguity.\n- Directs API calls to the appropriate table for the requested action.\n- Case-sensitive and must strictly adhere to DynamoDB naming conventions.\n- Acts as a critical routing parameter in API requests to identify the data source.\n\n**Implementation guidance**\n- Confirm the table name matches the existing DynamoDB table in AWS, respecting case sensitivity.\n- Avoid using reserved words or disallowed special characters to prevent API errors.\n- Validate the table name format programmatically before making API calls.\n- Manage table names through environment variables or configuration files to facilitate deployment across multiple environments (development, staging, production).\n- Ensure all application components referencing the table name are updated consistently if the table name changes.\n- Incorporate error handling to manage cases where the specified table does not exist or is inaccessible.\n\n**Examples**\n- \"Users\"\n- \"Orders2024\"\n- \"Inventory_Table\"\n- \"CustomerData\"\n- \"Sales-Data.2024\"\n\n**Important notes**\n- DynamoDB table names can be up to 255 characters in length.\n- Table names are case-sensitive and must be used consistently across all API calls.\n- The specified table must exist in the targeted AWS region before any operation is performed.\n- Changing the table name requires updating all dependent code and redeploying applications as necessary.\n- Incorrect table names will result in API errors such as ResourceNotFoundException.\n\n**Dependency chain**\n- Dependent on the AWS account and region context where DynamoDB is accessed.\n- Requires appropriate IAM permissions to perform operations on the specified table.\n- Works in conjunction with other DynamoDB parameters such as key schema, attribute definitions, and provisioned throughput settings.\n- Relies on network connectivity and AWS service availability for successful API interactions.\n\n**Technical details**\n- Represented as a string value.\n- Must conform to DynamoDB naming rules: allowed characters include alphanumeric characters, underscore (_), hy"},"partitionKey":{"type":"string","description":"partitionKey specifies the attribute name used as the primary partition key in a DynamoDB table. This key is essential for uniquely identifying each item within the table and determines the partition where the data is physically stored. It plays a fundamental role in data distribution, scalability, and query efficiency by enabling DynamoDB to hash the key value and allocate items across multiple storage partitions. The partition key must be present in every item and is required for all operations involving item creation, retrieval, update, and deletion. Selecting an appropriate partition key with high cardinality ensures even data distribution, prevents performance bottlenecks, and supports efficient query patterns. 
When combined with an optional sort key, it forms a composite primary key that enables more complex data models and range queries.\n\n**Field behavior**\n- Defines the primary attribute used to partition and organize data in a DynamoDB table.\n- Must have unique values within the context of the partition to ensure item uniqueness.\n- Drives the distribution of data across partitions to optimize performance and scalability.\n- Required for all item-level operations such as put, get, update, and delete.\n- Works in tandem with an optional sort key to form a composite primary key for more complex data models.\n\n**Implementation guidance**\n- Select an attribute with a high cardinality (many distinct values) to promote even data distribution and avoid hot partitions.\n- Ensure the partition key attribute is included in every item stored in the table.\n- Analyze application access patterns to choose a partition key that supports efficient queries and minimizes read/write contention.\n- Use a composite key (partition key + sort key) when you need to model one-to-many relationships or perform range queries.\n- Avoid using attributes with low variability or skewed distributions as partition keys to prevent performance bottlenecks.\n\n**Examples**\n- \"userId\" for applications where data is organized by individual users.\n- \"orderId\" in an e-commerce system to uniquely identify each order.\n- \"deviceId\" for storing telemetry data from IoT devices.\n- \"customerEmail\" when email addresses serve as unique identifiers.\n- \"regionCode\" combined with a sort key for geographic data partitioning.\n\n**Important notes**\n- The partition key is mandatory and immutable after table creation; changing it requires creating a new table.\n- The attribute type must be one of DynamoDB’s supported scalar types: String, Number, or Binary.\n- Partition key values are hashed internally by DynamoDB to determine the physical partition.\n- Using a poorly chosen partition key"},"sortKey":{"type":"string","description":"The attribute name designated as the sort key (also known as the range key) in a DynamoDB table. This key is essential for ordering and organizing items that share the same partition key, enabling efficient sorting, range queries, and precise retrieval of data within a partition. Together with the partition key, the sort key forms the composite primary key that uniquely identifies each item in the table. It plays a critical role in query operations by allowing filtering, sorting, and conditional retrieval based on its values, which can be of type String, Number, or Binary. 
Proper definition and usage of the sort key significantly enhance data access patterns, query performance, and cost efficiency in DynamoDB.\n\n**Field behavior**\n- Defines the attribute DynamoDB uses to sort and organize items within the same partition key.\n- Enables efficient range queries, ordered retrieval, and filtering of items.\n- Must be unique in combination with the partition key for each item to ensure item uniqueness.\n- Supports composite key structures when combined with the partition key.\n- Influences the physical storage order of items within a partition, optimizing query performance.\n\n**Implementation guidance**\n- Specify the attribute name exactly as defined in the DynamoDB table schema.\n- Ensure the attribute type matches the table definition (String, Number, or Binary).\n- Use this key to perform queries requiring sorting, range-based filtering, or conditional retrieval.\n- Omit or set to null if the DynamoDB table does not define a sort key.\n- Validate that the sort key attribute exists and is properly indexed in the table schema before use.\n- Consider the sort key design carefully to support intended query patterns and access efficiency.\n\n**Examples**\n- \"timestamp\"\n- \"createdAt\"\n- \"orderId\"\n- \"category#date\"\n- \"eventDate\"\n- \"userScore\"\n\n**Important notes**\n- The sort key is optional; some DynamoDB tables only have a partition key without a sort key.\n- Queries specifying both partition key and sort key conditions are more efficient and cost-effective.\n- The sort key attribute must be pre-defined in the table schema and cannot be added dynamically.\n- The combination of partition key and sort key must be unique for each item.\n- Sort key values influence the physical storage order of items within a partition, impacting query speed.\n- Using a well-designed sort key can reduce the need for additional indexes and improve scalability.\n\n**Dependency chain**\n- Requires a DynamoDB table configured with a composite primary key (partition key"},"itemDocument":{"type":"string","description":"The `itemDocument` property represents the complete and authoritative data record stored within a DynamoDB table. It is structured as a JSON object that comprehensively includes all attribute names and their corresponding values, fully reflecting the schema and data model of the DynamoDB item. This property encapsulates every attribute of the item, including mandatory primary keys, optional sort keys (if applicable), and any additional data fields, supporting complex nested objects, maps, lists, and arrays to accommodate DynamoDB’s rich data types. 
It serves as the primary representation of an item for all item-level operations such as reading (GetItem), writing (PutItem), updating (UpdateItem), and deleting (DeleteItem) within DynamoDB, ensuring data integrity and consistency throughout these processes.\n\n**Field behavior**\n- Contains the full and exact representation of a DynamoDB item as a structured JSON document.\n- Includes all required attributes such as primary keys and sort keys, as well as any optional or additional fields.\n- Supports complex nested data structures including maps, lists, and sets, consistent with DynamoDB’s flexible data model.\n- Acts as the main payload for CRUD (Create, Read, Update, Delete) operations on DynamoDB items.\n- Reflects either the current stored state or the intended new state of the item depending on the operation context.\n- Can represent partial documents when used with projection expressions or update operations that modify subsets of attributes.\n\n**Implementation guidance**\n- Ensure strict adherence to DynamoDB’s supported data types (String, Number, Binary, Boolean, Null, List, Map, Set) and attribute naming conventions.\n- Validate presence and correct formatting of primary key attributes to uniquely identify the item for key-based operations.\n- For update operations, provide either the entire item document or only the attributes to be modified, depending on the API method and update strategy.\n- Maintain consistent attribute names and data types across operations to preserve schema integrity and prevent runtime errors.\n- Utilize attribute projections or partial documents when full item data is unnecessary to optimize performance, reduce latency, and minimize cost.\n- Handle nested structures carefully to ensure proper serialization and deserialization between application code and DynamoDB.\n\n**Examples**\n- `{ \"UserId\": \"12345\", \"Name\": \"John Doe\", \"Age\": 30, \"Preferences\": { \"Language\": \"en\", \"Notifications\": true } }`\n- `{ \"OrderId\": \"A100\", \"Date\": \"2024-06-01\", \"Items\":"},"updateExpression":{"type":"string","description":"A string that defines the specific attributes to be updated, added, or removed within a DynamoDB item using DynamoDB's UpdateExpression syntax. This expression provides precise control over how item attributes are modified by including one or more of the following clauses: SET (to assign or overwrite attribute values), REMOVE (to delete attributes), ADD (to increment numeric attributes or add elements to sets), and DELETE (to remove elements from sets). The updateExpression enables atomic, conditional, and efficient updates without replacing the entire item, ensuring data integrity and minimizing write costs. It must be carefully constructed using placeholders for attribute names and values—ExpressionAttributeNames and ExpressionAttributeValues—to avoid conflicts with reserved keywords and prevent injection vulnerabilities. 
This expression is essential for performing complex update operations in a single request, supporting both simple attribute changes and advanced manipulations like incrementing counters or modifying sets.\n\n**Field behavior**\n- Specifies one or more atomic update operations on a DynamoDB item's attributes.\n- Supports combining multiple update actions (SET, REMOVE, ADD, DELETE) within a single expression, separated by spaces.\n- Requires strict adherence to DynamoDB's UpdateExpression syntax and semantics.\n- Works in conjunction with ExpressionAttributeNames and ExpressionAttributeValues to safely reference attribute names and values.\n- Only affects the attributes explicitly mentioned; all other attributes remain unchanged.\n- Enables conditional updates when combined with ConditionExpression for safe concurrent modifications.\n\n**Implementation guidance**\n- Construct the updateExpression using valid DynamoDB syntax, incorporating clauses such as SET, REMOVE, ADD, and DELETE as needed.\n- Always use placeholders (e.g., '#attrName' for attribute names and ':value' for attribute values) to avoid conflicts with reserved words and enhance security.\n- Validate the expression syntax and ensure all referenced placeholders are defined before executing the update operation.\n- Combine multiple update actions by separating them with spaces within the same expression string.\n- Test expressions thoroughly to prevent runtime errors caused by syntax issues or missing placeholders.\n- Use SET for assigning or overwriting attribute values, REMOVE for deleting attributes, ADD for incrementing numbers or adding set elements, and DELETE for removing set elements.\n- Avoid using reserved keywords directly; always substitute them with ExpressionAttributeNames placeholders.\n\n**Examples**\n- \"SET #name = :newName, #age = :newAge REMOVE #obsoleteAttribute\"\n- \"ADD #score :incrementValue DELETE #tags :tagToRemove\"\n- \"SET #status = :statusValue\"\n- \"REMOVE #temporaryField"},"conditionExpression":{"type":"string","description":"A string that defines the conditional logic to be applied to a DynamoDB operation, specifying the precise criteria that must be met for the operation to proceed. This expression leverages DynamoDB's condition expression syntax to evaluate attributes of the targeted item, enabling fine-grained control over operations such as PutItem, UpdateItem, DeleteItem, and transactional writes. It supports a comprehensive set of logical operators (AND, OR, NOT), comparison operators (=, <, >, BETWEEN, IN), functions (attribute_exists, attribute_not_exists, begins_with, contains, size), and attribute path notations to construct complex, nested conditions. 
This ensures data integrity by preventing unintended modifications and enforcing business rules at the database level.\n\n**Field behavior**\n- Determines whether the DynamoDB operation executes based on the evaluation of the condition against the item's attributes.\n- Evaluated atomically before performing the operation; if the condition evaluates to false, the operation is aborted and no changes are made.\n- Supports combining multiple conditions using logical operators for complex, multi-attribute criteria.\n- Utilizes built-in functions to inspect attribute existence, value patterns, and sizes, enabling sophisticated conditional checks.\n- Applies exclusively to the specific item targeted by the operation, including nested attributes accessed via attribute path notation.\n- Influences transactional operations by ensuring all conditions in a transaction are met before committing.\n\n**Implementation guidance**\n- Always use ExpressionAttributeNames and ExpressionAttributeValues to safely reference attribute names and values, avoiding conflicts with reserved keywords and improving readability.\n- Validate the syntax of the condition expression prior to request submission to prevent runtime errors and failed operations.\n- Carefully construct expressions to handle edge cases, such as attributes that may be missing or have null values.\n- Combine multiple conditions logically to enforce precise constraints and business logic on the operation.\n- Test condition expressions thoroughly across diverse data scenarios to ensure expected behavior and robustness.\n- Keep expressions as simple and efficient as possible to optimize performance and reduce request complexity.\n\n**Examples**\n- \"attribute_exists(#name) AND #age >= :minAge\"\n- \"#status = :activeStatus\"\n- \"attribute_not_exists(#id)\"\n- \"#score BETWEEN :low AND :high\"\n- \"begins_with(#title, :prefix)\"\n- \"contains(#tags, :tagValue) OR size(#comments) > :minComments\"\n\n**Important notes**\n- The conditionExpression is optional but highly recommended to safeguard against unintended overwrites, deletions, or updates.\n- If the condition"},"expressionAttributeNames":{"type":"string","description":"expressionAttributeNames is a map of substitution tokens used to safely reference attribute names within DynamoDB expressions. This property allows you to define placeholder tokens for attribute names that might otherwise cause conflicts due to being reserved words, containing special characters, or having names that are dynamically generated. 
By using these placeholders, you can avoid syntax errors and ambiguities in expressions such as ProjectionExpression, FilterExpression, UpdateExpression, and ConditionExpression, ensuring your queries and updates are both valid and maintainable.\n\n**Field behavior**\n- Functions as a dictionary where each key is a placeholder token starting with a '#' character, and each value is the actual attribute name it represents.\n- Enables the use of reserved words, special characters, or otherwise problematic attribute names within DynamoDB expressions.\n- Supports complex expressions by allowing multiple attribute name substitutions within a single expression.\n- Prevents syntax errors and improves clarity by explicitly mapping placeholders to attribute names.\n- Applies only within the scope of the expression where it is defined, ensuring localized substitution without affecting other parts of the request.\n\n**Implementation guidance**\n- Always prefix keys with a '#' to clearly indicate they are substitution tokens.\n- Ensure each placeholder token is unique within the scope of the expression to avoid conflicts.\n- Use expressionAttributeNames whenever attribute names are reserved words, contain special characters, or are dynamically generated.\n- Combine with expressionAttributeValues when expressions require both attribute name and value substitutions.\n- Verify that the attribute names used as values exactly match those defined in your DynamoDB table schema to prevent runtime errors.\n- Keep the mapping concise and relevant to the attributes used in the expression to maintain readability.\n- Avoid overusing substitution tokens for attribute names that do not require it, to keep expressions straightforward.\n\n**Examples**\n- `{\"#N\": \"Name\", \"#A\": \"Age\"}` to substitute attribute names \"Name\" and \"Age\" in an expression.\n- `{\"#status\": \"Status\"}` when \"Status\" is a reserved word or needs escaping in an expression.\n- `{\"#yr\": \"Year\", \"#mn\": \"Month\"}` for expressions involving multiple date-related attributes.\n- `{\"#addr\": \"Address\", \"#zip\": \"ZipCode\"}` to handle attribute names with special characters or spaces.\n- `{\"#P\": \"Price\", \"#Q\": \"Quantity\"}` used in an UpdateExpression to increment numeric attributes.\n\n**Important notes**\n- Keys must always begin with the '#' character; otherwise, the substitution"},"expressionAttributeValues":{"type":"string","description":"expressionAttributeValues is a map of substitution tokens for attribute values used within DynamoDB expressions. These tokens serve as placeholders that safely inject values into expressions such as ConditionExpression, FilterExpression, UpdateExpression, and KeyConditionExpression. By separating data from code, this mechanism prevents injection attacks and syntax errors, ensuring secure and reliable query and update operations. Each key in this map is a placeholder string that begins with a colon (\":\"), and each corresponding value is a DynamoDB attribute value object representing the actual data to be used in the expression. 
This design enforces that user input is never directly embedded into expressions, enhancing both security and maintainability.\n\n**Field behavior**\n- Contains key-value pairs where keys are placeholders prefixed with a colon (e.g., \":val1\") used within DynamoDB expressions.\n- Each key maps to a DynamoDB attribute value object specifying the data type and value, supporting all DynamoDB data types including strings, numbers, binaries, lists, maps, sets, and booleans.\n- Enables safe and dynamic substitution of attribute values in expressions, avoiding direct string concatenation and reducing the risk of injection vulnerabilities.\n- Mandatory when expressions reference attribute values to ensure correct parsing and execution of queries or updates.\n- Placeholders are case-sensitive and must exactly match those used in the corresponding expressions.\n- Supports complex data structures, allowing nested maps and lists as attribute values for advanced querying scenarios.\n\n**Implementation guidance**\n- Always prefix placeholder keys with a colon (\":\") to clearly identify them as substitution tokens within expressions.\n- Ensure every placeholder referenced in an expression has a corresponding entry in expressionAttributeValues to avoid runtime errors.\n- Format each value according to DynamoDB’s attribute value specification, such as { \"S\": \"string\" }, { \"N\": \"123\" }, { \"BOOL\": true }, or complex nested structures.\n- Avoid using reserved words, spaces, or invalid characters in placeholder names to prevent syntax errors and conflicts.\n- Validate that the data types of values align with the expected attribute types defined in the table schema to maintain data integrity.\n- When multiple expressions are used simultaneously, maintain unique and consistent placeholder names to prevent collisions and ambiguity.\n- Use expressionAttributeValues in conjunction with expressionAttributeNames when attribute names are reserved words or contain special characters to ensure proper expression parsing.\n- Consider the size limitations of expressionAttributeValues to avoid exceeding DynamoDB’s request size limits.\n\n**Examples**\n- { \":val1\": { \"S\": \""},"ignoreExtract":{"type":"string","description":"Specifies whether the extraction process from DynamoDB data should be ignored or skipped during the operation, enabling conditional control over data retrieval to optimize performance, reduce latency, or avoid redundant processing in data workflows. This flag allows systems to bypass the extraction step when data is already available, cached, or deemed unnecessary for the current operation, thereby improving efficiency and resource utilization.  \n**Field** BEHAVIOR:  \n- When set to true, the extraction of data from DynamoDB is completely bypassed, preventing any data retrieval from the source.  \n- When set to false or omitted, the extraction process proceeds as normal, retrieving data for further processing.  \n- Primarily used to optimize workflows by skipping unnecessary extraction steps when data is already available or not needed.  \n- Influences the flow of data processing by controlling whether DynamoDB data is fetched or not.  \n- Affects downstream processing by potentially providing no new data input if extraction is skipped.  \n**Implementation guidance:**  \n- Accepts a boolean value: true to skip extraction, false to perform extraction.  \n- Ensure that downstream components can handle scenarios where no data is extracted due to this flag being true.  
\n- Validate input to avoid unintended skipping of extraction that could cause data inconsistencies or stale results.  \n- Use this flag judiciously, especially in complex pipelines where data dependencies exist, to prevent breaking data integrity.  \n- Incorporate appropriate logging or monitoring to track when extraction is skipped for auditability and debugging.  \n**Examples:**  \n- ignoreExtract: true — extraction from DynamoDB is skipped, useful when data is cached or pre-fetched.  \n- ignoreExtract: false — extraction occurs normally, retrieving fresh data from DynamoDB.  \n- omit the property entirely — defaults to false, extraction proceeds as usual.  \n**Important notes:**  \n- Skipping extraction may result in incomplete, outdated, or stale data if subsequent operations rely on fresh DynamoDB data.  \n- This option should only be enabled when you are certain that extraction is unnecessary or redundant to avoid data inconsistencies.  \n- Consider the impact on data integrity, consistency, and downstream processing before enabling this flag.  \n- Misuse can lead to errors, unexpected behavior, or data quality issues in data workflows.  \n- Ensure that any caching or alternative data sources used in place of extraction are reliable and up-to-date.  \n**Dependency chain:**  \n- Requires a valid DynamoDB data source configuration to be relevant.  \n-"}}},"Http":{"type":"object","description":"Configuration for HTTP imports.\n\nIMPORTANT: When the _connectionId field points to a connection where the type is http, \nthis object MUST be populated for the import to function properly. This is a required configuration\nfor all HTTP-based imports, as determined by the connection associated with the import.\n","properties":{"formType":{"type":"string","enum":["assistant","http","rest","graph_ql","assistant_graphql"],"description":"The form type to use for the import.\n- **assistant**: The import is for a system that we have an assistant form for.  This is the default value if there is a connector available for the system.\n- **http**: The import is an HTTP import.\n- **rest**: This value is only used for legacy imports that are not supported by the new HTTP framework.\n- **graph_ql**: The import is a GraphQL import.\n- **assistant_graphql**: The import is for a system that we have an assistant form for and utilizes GraphQL.  This is the default value if there is a connector available for the system and the connector supports GraphQL.\n"},"type":{"type":"string","enum":["file","records"],"description":"Specifies the type of data being imported via HTTP.\n\n- **file**: The import handles raw file content (binary data, PDF, images, etc.)\n- **records**: The import handles structured record data (JSON objects, XML records, etc.) 
- most common\n\nMost HTTP imports use \"records\" type for standard REST API data imports.\nOnly use \"file\" type when importing raw file content that will be processed downstream.\n"},"requestMediaType":{"type":"string","enum":["xml","json","csv","urlencoded","form-data","octet-stream","plaintext"],"description":"Specifies the media type (content type) of the request body sent to the target API.\n\n- **json**: For JSON request bodies (Content-Type: application/json) - most common\n- **xml**: For XML request bodies (Content-Type: application/xml)\n- **csv**: For CSV data (Content-Type: text/csv)\n- **urlencoded**: For URL-encoded form data (Content-Type: application/x-www-form-urlencoded)\n- **form-data**: For multipart form data\n- **octet-stream**: For binary data\n- **plaintext**: For plain text content\n"},"_httpConnectorEndpointIds":{"type":"array","items":{"type":"string","format":"objectId"},"description":"Array of HTTP connector endpoint IDs to use for this import.\nMultiple endpoints can be specified for different request types or operations.\n"},"blobFormat":{"type":"string","enum":["utf8","ucs2","utf-16le","ascii","binary","base64","hex"],"description":"Character encoding format for blob/binary data imports.\nOnly relevant when type is \"file\" or when handling binary content.\n"},"batchSize":{"type":"integer","description":"Maximum number of records to submit per HTTP request.\n\n- REST services typically use batchSize=1 (one record per request)\n- REST batch endpoints or RPC/XML services can handle multiple records\n- Affects throughput and API rate limiting\n\nDefault varies by API - consult target API documentation for optimal value.\n"},"successMediaType":{"type":"string","enum":["xml","json","plaintext"],"description":"Specifies the media type expected in successful responses.\n\n- **json**: For JSON responses (most common)\n- **xml**: For XML responses\n- **plaintext**: For plain text responses\n"},"requestType":{"type":"array","items":{"type":"string","enum":["CREATE","UPDATE"]},"description":"Specifies the type of operation(s) this import performs.\n\n- **CREATE**: Creating new records\n- **UPDATE**: Updating existing records\n\nFor composite/upsert imports, specify both operations. The array is positionally\naligned with relativeURI and method arrays — each index maps to one operation.\nThe existingExtract field determines which operation is used at runtime.\n\nExample composite upsert:\n```json\n\"requestType\": [\"UPDATE\", \"CREATE\"],\n\"method\": [\"PUT\", \"POST\"],\n\"relativeURI\": [\"/customers/{{{data.0.customerId}}}.json\", \"/customers.json\"],\n\"existingExtract\": \"customerId\"\n```\n"},"errorMediaType":{"type":"string","enum":["xml","json","plaintext"],"description":"Specifies the media type expected in error responses.\n\n- **json**: For JSON error responses (most common)\n- **xml**: For XML error responses\n- **plaintext**: For plain text error messages\n"},"relativeURI":{"type":"array","items":{"type":"string"},"description":"**CRITICAL: This MUST be an array of strings, NOT an object.**\n\n**Handlebars data context:** At runtime, `relativeURI` templates always render against the\n**pre-mapped** record — the original input record that arrives at this import step, before\nthe Import's mapping step is applied. 
This is intentional: URI construction usually needs\nbusiness identifiers that mappings may rename or remove.\n\nArray of relative URI path(s) for the import endpoint.\nCan include handlebars expressions like `{{field}}` for dynamic values.\n\n**Single-operation imports (one element)**\n```json\n\"relativeURI\": [\"/api/v1/customers\"]\n```\n\n**Composite/upsert imports (multiple elements — positionally aligned)**\nFor imports with both CREATE and UPDATE in requestType, relativeURI, method,\nand requestType arrays are **positionally aligned**. Each index corresponds\nto one operation. The existingExtract field determines which index is used at\nruntime: if the existingExtract field has a value → use the UPDATE index;\nif empty/missing → use the CREATE index.\n\nExample of a composite upsert:\n```json\n\"relativeURI\": [\"/customers/{{{data.0.shopifyCustomerId}}}.json\", \"/customers.json\"],\n\"method\": [\"PUT\", \"POST\"],\n\"requestType\": [\"UPDATE\", \"CREATE\"],\n\"existingExtract\": \"shopifyCustomerId\"\n```\nHere index 0 is the UPDATE operation (PUT to a specific customer) and index 1 is\nthe CREATE operation (POST to the collection endpoint).\n\n**IMPORTANT**: For composite imports, use the `{{{data.0.fieldName}}}` handlebars\nsyntax (triple braces with data.0 prefix) to reference the existing record ID in\nthe UPDATE URI. Do NOT use `{{#if}}` conditionals in a single URI — use separate\narray elements instead.\n\n**Wrong format (DO not do THIS)**\n```json\n\"relativeURI\": {\"/api/v1/customers\": \"CREATE\"}\n```\n```json\n\"relativeURI\": [\"{{#if id}}/customers/{{id}}.json{{else}}/customers.json{{/if}}\"]\n```\n"},"method":{"type":"array","items":{"type":"string","enum":["GET","PUT","POST","PATCH","DELETE"]},"description":"HTTP method(s) used for import requests.\n\n- **POST**: Most common for creating new records\n- **PUT**: For full updates/replacements\n- **PATCH**: For partial updates\n- **DELETE**: For deletions\n- **GET**: Rarely used for imports\n\nFor composite/upsert imports, specify multiple methods positionally aligned\nwith relativeURI and requestType arrays. Each index maps to one operation.\n\nExample: `[\"PUT\", \"POST\"]` with `requestType: [\"UPDATE\", \"CREATE\"]` means\nindex 0 uses PUT for updates, index 1 uses POST for creates.\n"},"_httpConnectorVersionId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector version being used."},"_httpConnectorResourceId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector resource being used."},"_httpConnectorEndpointId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector endpoint being used (single endpoint)."},"body":{"type":"array","items":{"type":"object"},"description":"**Important:** This field should be LEFT UNDEFINED for the vast majority of HTTP imports.\n\nThis is an OPTIONAL field that should only be set in very specific, rare cases. 
For standard REST API imports\nthat send JSON data, this field MUST be left undefined because data transformation is handled through the\nImport's mapping step, not through this body template.\n\n**When to leave this field undefined (MOST common CASE)**\n\nLeave this field undefined for ALL standard data imports, including:\n- REST API imports that send JSON records to create/update resources\n- APIs that accept JSON request bodies (the majority of modern APIs)\n- Any import where you want to use Import mappings to transform data\n- Standard CRUD operations (Create, Update, Delete) via REST APIs\n- GraphQL mutations that accept JSON input\n- Any import where the Import mapping step will handle data transformation\n\nExamples of imports that should have this field undefined:\n- \"Import customer data into Shopify\" → undefined (mappings handle the data transformation)\n- \"Create orders via custom REST API\" → undefined (mappings handle the data transformation)\n\n**KEY CONCEPT:** Imports have a dedicated \"Mapping\" step that transforms source data into the target\nformat. This is the standard, preferred approach. When the body field is set, the mapping step still runs\nfirst — the body template then renders against the **post-mapped** record instead of the destination API\nreceiving the mapped record as-is. This lets you post-process the mapped output into XML, SOAP envelopes,\nor other non-JSON shapes, but forces you to hand-template the entire request and is harder to maintain and debug.\n\n**When to set this field (RARE use CASE)**\n\nSet this field ONLY when:\n1. The target API requires XML or SOAP format (not JSON)\n2. You need a highly customized request structure that cannot be achieved through standard mappings\n3. You're working with a legacy API that requires very specific formatting\n4. The API documentation explicitly shows a complex XML/SOAP envelope structure\n5. When the user explicitly specifies a request body\n\nExamples of when to set body:\n- \"Import to SOAP web service\" → set body with XML/SOAP template\n- \"Send data to XML-only legacy API\" → set body with XML template\n- \"Complex SOAP envelope with multiple nested elements\" → set body with SOAP template\n- \"Create an import to /test with the body as {'key': 'value', 'key2': 'value2', ...}\"\n\n**Implementation details**\n\nWhen this field is undefined (default for most imports):\n- The system uses the Import's mapping step to transform data\n- Mappings provide a visual, intuitive way to map source fields to target fields\n- The mapped data is automatically sent as the request body\n- Easier to debug and maintain\n\nWhen this field is set:\n- The mapping step still runs — it produces a post-mapped record that is then fed into this body template\n- The template renders against the **post-mapped** record (use `{{record.<mappedField>}}` to reference mapping outputs)\n- You are responsible for shaping the final request body (XML/SOAP/custom JSON) by wrapping / restructuring the mapped data\n- Harder to maintain and debug than relying on mappings alone\n\n**Decision flowchart**\n\n1. Does the target API accept JSON request bodies?\n   → YES: Leave this field undefined (use mappings instead)\n2. Are you comfortable using the Import's mapping step?\n   → YES: Leave this field undefined\n3. Does the API require XML or SOAP format?\n   → YES: Set this field with appropriate XML/SOAP template\n4. 
Does the API require a highly unusual request structure that mappings can't handle?\n   → YES: Consider setting this field (but try mappings first)\n\nRemember: When in doubt, leave this field undefined. Almost all modern REST APIs work perfectly\nwith the standard mapping approach, and this is much easier to use and maintain.\n"},"existingExtract":{"type":"string","description":"Field name or JSON path used to determine if a record already exists in the destination,\nfor composite (upsert) imports that have both CREATE and UPDATE request types.\n\nWhen the import has requestType: [\"UPDATE\", \"CREATE\"] (or [\"CREATE\", \"UPDATE\"]):\n- If the field specified by existingExtract has a value in the incoming record,\n  the UPDATE operation (PUT) is used with the corresponding relativeURI and method.\n- If the field is empty or missing, the CREATE operation (POST) is used.\n\nThis field drives the upsert decision and must match a field that is populated by\nan upstream lookup or response mapping (e.g., a destination system ID like \"shopifyCustomerId\").\n\nIMPORTANT: Only set this field for composite/upsert imports that have BOTH CREATE and UPDATE\nin requestType. Do not set for single-operation imports.\n"},"ignoreExtract":{"type":"string","description":"JSON path to the field in the record that is used to determine if the record already exists.\nOnly used when ignoreMissing or ignoreExisting is set.\n"},"endPointBodyLimit":{"type":"integer","description":"Maximum size limit for the request body in bytes.\nUsed to enforce API-specific size constraints.\n"},"headers":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}},"description":"Custom HTTP headers to include with import requests.\n\nEach header has a name and value, which can include handlebars expressions.\n\nAt runtime, header value templates render against the **pre-mapped**\nrecord (the original input record, before the Import's mapping step).\nUse `{{record.<field>}}` to reference fields as they appear in the\nupstream source — mappings don't rename or drop fields for header\nevaluation.\n"},"response":{"type":"object","properties":{"resourcePath":{"type":"array","items":{"type":"string"},"description":"JSON path to the resource collection in the response.\nRequired when batchSize > 1.\n"},"resourceIdPath":{"type":"array","items":{"type":"string"},"description":"JSON path to the ID field in each resource.\nIf not specified, system looks for \"id\" or \"_id\" fields.\n"},"successPath":{"type":"array","items":{"type":"string"},"description":"JSON path to a field that indicates success.\nUsed for APIs that don't rely solely on HTTP status codes.\n"},"successValues":{"type":"array","items":{"type":"array","items":{"type":"string"}},"description":"Values at successPath that indicate success.\n"},"failPath":{"type":"array","items":{"type":"string"},"description":"JSON path to a field that indicates failure (even if status=200).\n"},"failValues":{"type":"array","items":{"type":"array","items":{"type":"string"}},"description":"Values at failPath that indicate failure.\n"},"errorPath":{"type":"string","description":"JSON path to error message in the response.\n"},"allowArrayforSuccessPath":{"type":"boolean","description":"Whether to allow array values for successPath evaluation.\n"},"hasHeader":{"type":"boolean","description":"Indicates if the first record in the response is a header row (for CSV responses).\n"}},"description":"Configuration for parsing and interpreting HTTP 
responses.\n"},"_asyncHelperId":{"type":"string","format":"objectId","description":"Reference to an AsyncHelper resource for handling asynchronous import operations.\n\nUsed when the target API uses a \"fire-and-check-back\" pattern (HTTP 202, polling, etc.).\n"},"ignoreLookupName":{"type":"string","description":"Name of the lookup to use for checking resource existence.\nOnly used when ignoreMissing or ignoreExisting is set.\n"}}},"Ftp":{"type":"object","description":"Configuration for Ftp exports","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"Identifier for the third-party connector utilized in the FTP integration, serving as a unique and immutable reference that links the FTP configuration to an external connector service. This identifier is essential for accurately associating the FTP connection with the intended third-party system, enabling seamless and secure data transfer, synchronization, and management through the specified connector. It ensures that all FTP operations are correctly routed and managed via the external service, supporting reliable integration workflows and maintaining system integrity throughout the connection lifecycle.\n\n**Field behavior**\n- Uniquely identifies the third-party connector associated with the FTP configuration.\n- Acts as a key reference to bind FTP settings to a specific external connector service.\n- Required when creating, updating, or managing FTP connections via third-party integrations.\n- Typically immutable after initial assignment to preserve connection consistency.\n- Used by system components to route FTP operations through the correct external connector.\n\n**Implementation guidance**\n- Should be a string or numeric identifier that aligns with the third-party connector’s identification scheme.\n- Must be validated against the system’s registry of available connectors to ensure validity.\n- Should be set securely and protected from unauthorized modification.\n- Updates to this identifier should be handled cautiously, with appropriate validation and impact assessment.\n- Ensure the identifier complies with any formatting or naming conventions imposed by the third-party service.\n- Transmit and store this identifier securely, using encryption where applicable.\n\n**Examples**\n- \"connector_12345\"\n- \"tpConnector-67890\"\n- \"ftpThirdPartyConnector01\"\n- \"extConnector_abcde\"\n- \"thirdPartyFTP_001\"\n\n**Important notes**\n- This field is critical for correctly mapping FTP configurations to their corresponding third-party services.\n- An incorrect or missing _tpConnectorId can lead to connection failures, data misrouting, or integration errors.\n- Synchronization between this identifier and the third-party connector records must be maintained to avoid inconsistencies.\n- Changes to this field may require re-authentication or reconfiguration of the FTP integration.\n- Proper error handling should be implemented to manage cases where the identifier does not match any registered connector.\n\n**Dependency chain**\n- Depends on the existence and registration of third-party connectors within the system.\n- Referenced by FTP connection management modules, authentication services, and integration workflows.\n- Interacts with authorization mechanisms to ensure secure access to third-party services.\n- May influence logging, monitoring, and auditing processes related to FTP integrations.\n\n**Technical details**\n- Data type: string (or"},"directoryPath":{"type":"string","description":"The path to the 
directory on the FTP server where files will be accessed, stored, or managed. This path can be specified either as an absolute path starting from the root directory of the FTP server or as a relative path based on the user's initial directory upon login, depending on the server's configuration and permissions. It is essential that the path adheres to the FTP server's directory structure, naming conventions, and access controls to ensure successful file operations such as uploading, downloading, listing, or deleting files. The directoryPath serves as the primary reference point for all file-related commands during an FTP session, determining the scope and context of file management activities.\n\n**Field behavior**\n- Defines the target directory location on the FTP server for all file-related operations.\n- Supports both absolute and relative path formats, interpreted according to the FTP server’s setup.\n- Must comply with the server’s directory hierarchy, naming rules, and access permissions.\n- Used by the FTP client to navigate and perform operations within the specified directory.\n- Influences the scope of accessible files and subdirectories during FTP sessions.\n- Changes to this path affect subsequent file operations until updated or reset.\n\n**Implementation guidance**\n- Validate the directory path format to ensure compatibility with the FTP server, typically using forward slashes (`/`) as separators.\n- Normalize the path to remove redundant elements such as `.` (current directory) and `..` (parent directory) to prevent errors and security vulnerabilities.\n- Handle scenarios where the specified directory does not exist by either creating it if permissions allow or returning a clear, descriptive error message.\n- Properly encode special characters and escape sequences in the path to conform with FTP protocol requirements and server expectations.\n- Provide informative and user-friendly error messages when the path is invalid, inaccessible, or lacks sufficient permissions.\n- Sanitize input to prevent directory traversal attacks or unauthorized access to restricted areas.\n- Consider server-specific behaviors such as case sensitivity, symbolic links, and virtual directories when interpreting the path.\n- Ensure that path changes are atomic and consistent to avoid race conditions during concurrent FTP operations.\n\n**Examples**\n- `/uploads/images`\n- `documents/reports/2024`\n- `/home/user/data`\n- `./backup`\n- `../shared/resources`\n- `/var/www/html`\n- `projects/current`\n\n**Important notes**\n- Access to the specified directory depends on the credentials used during FTP authentication.\n- Directory permissions directly affect the ability to read, write, or list files within the"},"fileName":{"type":"string","description":"The name of the file to be accessed, transferred, or manipulated via FTP (File Transfer Protocol). This property specifies the exact filename, including its extension, uniquely identifying the file within the FTP directory. It is critical for accurately targeting the file during operations such as upload, download, rename, or delete. The filename must be precise and comply with the naming conventions and restrictions of the FTP server’s underlying file system to avoid errors. Case sensitivity depends on the server’s operating system, and special attention should be given to character encoding, especially when handling non-ASCII or international characters. 
The filename should not include any directory path separators, as these are managed separately in directory or path properties.\n\n**Field behavior**\n- Identifies the specific file within the FTP server directory for various file operations.\n- Must include the file extension to ensure precise identification.\n- Case sensitivity is determined by the FTP server’s operating system.\n- Excludes directory or path information; only the filename itself is specified.\n- Used in conjunction with directory or path properties to locate the full file path.\n\n**Implementation guidance**\n- Validate that the filename contains only characters allowed by the FTP server’s file system.\n- Ensure the filename is a non-empty, well-formed string without path separators.\n- Normalize case if required to match server-specific case sensitivity rules.\n- Properly handle encoding for special or international characters to prevent transfer errors.\n- Combine with directory or path properties to construct the complete file location.\n- Avoid embedding directory paths or separators within the filename property.\n\n**Default behavior**\n- When the user does not specify a file naming convention, default to timestamped filenames using {{timestamp}} (e.g., \"items-{{timestamp}}.csv\"). This ensures each run produces a unique file and avoids overwriting previous exports.\n- Only use a fixed filename (without timestamp) if the user explicitly requests overwriting or a fixed name.\n\n**Examples**\n- \"report.pdf\"\n- \"image_2024.png\"\n- \"backup_2023_12_31.zip\"\n- \"data.csv\"\n- \"items-{{timestamp}}.csv\"\n\n**Important notes**\n- The filename alone may not be sufficient to locate the file without the corresponding directory or path.\n- Different FTP servers and operating systems may have varying rules regarding case sensitivity.\n- Path separators such as \"/\" or \"\\\" must not be included in the filename.\n- Adherence to FTP server naming conventions and restrictions is essential to prevent errors.\n- Be mindful of potential filename length limits imposed by the FTP server or protocol.\n\n**Dependency chain**\n- Depends on FTP directory or path properties to specify the full file location.\n- Utilized alongside FTP server credentials and connection settings.\n- May be influenced by file transfer mode (binary or ASCII) based on the file type.\n\n**Technical details**\n- Data type: string\n- Must conform to the FTP server’s"},"inProgressFileName":{"type":"string","description":"inProgressFileName is the designated temporary filename assigned to a file during the FTP upload process to clearly indicate that the file transfer is currently underway and not yet complete. This naming convention serves as a safeguard to prevent other systems or processes from prematurely accessing, processing, or locking the file before the upload finishes successfully. Typically, once the upload is finalized, the file is renamed to its intended permanent filename, signaling that it is ready for further use or processing. Utilizing an in-progress filename helps maintain data integrity, avoid conflicts, and streamline workflow automation in environments where files are continuously transferred and monitored. 
It plays a critical role in managing file state visibility, ensuring that incomplete or partially transferred files are not mistakenly processed, which could lead to data corruption or inconsistent system behavior.\n\n**Field behavior**\n- Acts as a temporary marker for files actively being uploaded via FTP or similar protocols.\n- Prevents other processes, systems, or automated workflows from accessing or processing the file until the upload is fully complete.\n- The file is renamed atomically from the inProgressFileName to the final intended filename upon successful upload completion.\n- Helps manage concurrency, avoid file corruption, partial reads, or race conditions during file transfer.\n- Indicates the current state of the file within the transfer lifecycle, providing clear visibility into upload progress.\n- Supports cleanup mechanisms by identifying orphaned or stale temporary files resulting from interrupted uploads.\n\n**Implementation guidance**\n- Choose a unique, consistent, and easily identifiable naming pattern distinct from final filenames to clearly denote the in-progress status.\n- Commonly use suffixes or prefixes such as \".inprogress\", \".tmp\", \"_uploading\", or similar conventions that are recognized across systems.\n- Ensure the FTP server and client support atomic renaming operations to avoid partial file visibility or race conditions.\n- Verify that the chosen temporary filename does not collide with existing files in the target directory to prevent overwriting or confusion.\n- Maintain consistent naming conventions across all related systems, scripts, and automation tools for clarity and maintainability.\n- Implement robust cleanup routines to detect and remove orphaned or stale in-progress files from failed or abandoned uploads.\n- Coordinate synchronization between upload completion and renaming to prevent premature processing or file locking issues.\n\n**Examples**\n- \"datafile.csv.inprogress\"\n- \"upload.tmp\"\n- \"report_20240615.tmp\"\n- \"file_upload_inprogress\"\n- \"transaction_12345.uploading\"\n\n**Important notes**\n- This filename is strictly temporary and should never be"},"backupDirectoryPath":{"type":"string","description":"backupDirectoryPath specifies the file system path to the directory where backup files are stored during FTP operations. This directory serves as the designated location for saving copies of files before they are overwritten or deleted, ensuring that original data can be recovered if necessary. The path can be either absolute or relative, depending on the system context, and must conform to the operating system’s file path conventions. It is essential that this path is valid, accessible, and writable by the FTP service or application performing the backups to guarantee reliable backup creation and data integrity. Proper configuration of this path is critical to prevent data loss and to maintain a secure and organized backup environment.  \n**Field behavior:**  \n- Specifies the target directory for storing backup copies of files involved in FTP transactions.  \n- Activated only when backup functionality is enabled within the FTP process.  \n- Requires the path to be valid, accessible, and writable to ensure successful backup creation.  \n- Supports both absolute and relative paths, with relative paths resolved against a defined base directory.  \n- If unset or empty, backup operations may be disabled or fallback to a default directory if configured.  
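\n**Example configuration (illustrative only):**  \n- A sketch, for orientation only, of how this property might appear alongside the sibling directoryPath and fileName settings in an FTP import; the paths and file name shown are placeholders, not defaults.  \n```json\n{\n  \"directoryPath\": \"/outbound/orders\",\n  \"fileName\": \"orders-{{timestamp}}.csv\",\n  \"backupDirectoryPath\": \"/outbound/orders/backup\"\n}\n```  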
\n**Implementation guidance:**  \n- Validate the existence of the directory and verify write permissions before initiating backups.  \n- Normalize paths to handle trailing slashes, redundant separators, and platform-specific path formats.  \n- Provide clear and actionable error messages if the directory is invalid, inaccessible, or lacks necessary permissions.  \n- Support environment variables or placeholders for dynamic path resolution where applicable.  \n- Enforce security best practices by restricting access to the backup directory to authorized users or processes only.  \n- Ensure sufficient disk space is available in the backup directory to accommodate backup files without interruption.  \n- Implement concurrency controls such as file locking or synchronization if multiple backup operations may occur simultaneously.  \n**Examples:**  \n- \"/var/ftp/backups\"  \n- \"C:\\\\ftp\\\\backup\"  \n- \"./backup_files\"  \n- \"/home/user/ftp_backup\"  \n**Important notes:**  \n- The backup directory must have adequate storage capacity to prevent backup failures due to insufficient space.  \n- Permissions must allow the FTP process or service to create and modify files within the directory.  \n- Backup files may contain sensitive or confidential data; appropriate security measures should be enforced to protect them.  \n- Path formatting should align with the conventions of the underlying operating system to avoid errors.  \n- Failure to properly configure this path can result in loss of backup data or inability to recover"}}},"Jdbc":{"type":"object","description":"Configuration for JDBC import operations. Defines how data is written to a database\nvia a JDBC connection.\n\n**Query type determines which fields are required**\n\n| queryType        | Required fields              | Do NOT set        |\n|------------------|------------------------------|-------------------|\n| [\"per_record\"]   | query (array of SQL strings) | bulkInsert        |\n| [\"per_page\"]     | query (array of SQL strings) | bulkInsert        |\n| [\"bulk_insert\"]  | bulkInsert object            | query             |\n| [\"bulk_load\"]    | bulkLoad object              | query             |\n\n**Critical:** query IS AN ARRAY OF STRINGS\nThe query field must be an array of plain strings, NOT a single string and NOT an array of objects.\nCorrect: [\"INSERT INTO users (name) VALUES ('{{name}}')\"]\nWrong: \"INSERT INTO users ...\"\nWrong: [{\"query\": \"INSERT INTO users ...\"}]\n","properties":{"query":{"type":"array","items":{"type":"string"},"description":"Array of SQL query strings to execute. REQUIRED when queryType is [\"per_record\"] or [\"per_page\"].\n\nEach element is a complete SQL statement as a plain string. Typically contains a single query.\n\nUse Handlebars {{fieldName}} syntax to inject values from incoming records.\n\n**Format — array of strings**\n- CORRECT: [\"INSERT INTO users (name, email) VALUES ('{{name}}', '{{email}}')\"]\n- WRONG: \"INSERT INTO users ...\"  (not an array)\n- WRONG: [{\"query\": \"INSERT INTO users ...\"}]  (objects are invalid — causes Cast error)\n\n**Examples**\n- INSERT: [\"INSERT INTO users (name, email) VALUES ('{{name}}', '{{email}}')\"]\n- UPDATE: [\"UPDATE inventory SET qty = {{qty}} WHERE sku = '{{sku}}'\"]\n- MERGE/UPSERT: [\"MERGE INTO target USING (SELECT CAST(? AS VARCHAR) AS email) AS src ON target.email = src.email WHEN MATCHED THEN UPDATE SET name = ? 
WHEN NOT MATCHED THEN INSERT (name, email) VALUES (?, ?)\"]\n\n**String vs numeric values in handlebars**\n- Strings: wrap in single quotes — '{{name}}'\n- Numbers: no quotes — {{quantity}}\n"},"queryType":{"type":"array","items":{"type":"string","enum":["bulk_insert","per_record","per_page","bulk_load"]},"description":"Execution strategy for the SQL operation. REQUIRED. Must be an array with one value.\n\n**Decision tree**\n\n1. If UPDATE or UPSERT/MERGE → [\"per_record\"] (set query field)\n2. If INSERT with \"ignore existing\" / \"skip duplicates\" / match logic → [\"per_record\"] (set query field)\n3. If pure INSERT with no duplicate checking → [\"bulk_insert\"] (set bulkInsert object)\n4. If high-volume bulk load → [\"bulk_load\"] (set bulkLoad object)\n\n**Critical relationship to other fields**\n| queryType        | REQUIRES              | DO NOT SET   |\n|------------------|-----------------------|--------------|\n| [\"per_record\"]   | query (array)         | bulkInsert   |\n| [\"per_page\"]     | query (array)         | bulkInsert   |\n| [\"bulk_insert\"]  | bulkInsert object     | query        |\n| [\"bulk_load\"]    | bulkLoad object       | query        |\n\n**Examples**\n- Per-record upsert: [\"per_record\"]\n- Bulk insert: [\"bulk_insert\"]\n- Bulk load: [\"bulk_load\"]\n"},"bulkInsert":{"type":"object","description":"Bulk insert configuration. REQUIRED when queryType is [\"bulk_insert\"]. DO NOT SET when queryType is [\"per_record\"].\n\nEnables efficient batch insertion of records into a database table without per-record SQL.\n","properties":{"tableName":{"type":"string","description":"Target database table name for bulk insert. REQUIRED.\nCan include schema qualifiers (e.g., \"schema.tableName\").\n"},"batchSize":{"type":"string","description":"Number of records per batch during bulk insert.\nLarger values improve throughput but use more memory.\nCommon values: \"1000\", \"5000\".\n"}}},"bulkLoad":{"type":"object","description":"Bulk load configuration. REQUIRED when queryType is [\"bulk_load\"]. Uses database-native bulk loading for maximum throughput.\n","properties":{"tableName":{"type":"string","description":"Target database table name for bulk load. 
REQUIRED.\nCan include schema qualifiers (e.g., \"schema.tableName\").\n"},"primaryKeys":{"type":["array","null"],"items":{"type":"string"},"description":"Primary key column names for upsert/merge during bulk load.\nWhen set, existing rows matching these keys are updated; non-matching rows are inserted.\nExample: [\"id\"] or [\"order_id\", \"product_id\"] for composite keys.\n"}}},"lookups":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string","description":"Lookup name, referenced in field mappings or ignore logic."},"query":{"type":"string","description":"SQL query for the lookup (e.g., \"SELECT id FROM users WHERE email = '{{email}}'\")."},"extract":{"type":"string","description":"Path in the lookup result to use as the value (e.g., \"id\")."},"_lookupCacheId":{"type":"string","format":"objectId","description":"Reference to a cached lookup for performance optimization."},"allowFailures":{"type":"boolean","description":"When true, lookup failures do not stop the import."},"map":{"type":"object","description":"Static value mapping for lookup results."},"default":{"type":"string","description":"Default value when lookup returns no results."}}},"description":"Lookup queries executed against the JDBC connection to resolve reference values.\nEach lookup runs a SQL query and extracts a value to use in field mappings.\n"}}},"Mongodb":{"type":"object","description":"Configuration for MongoDB imports. This schema defines ONLY the MongoDB-specific configuration properties.\n\n**IMPORTANT:** Properties like `ignoreExisting`, `ignoreMissing`, `name`, `description` are NOT part of this schema - they belong at a different level. When generating this configuration, return ONLY the properties defined here (method, collection, ignoreExtract, etc.).","properties":{"method":{"type":"string","enum":["insertMany","updateOne"],"description":"Specifies the MongoDB write operation the import performs. REQUIRED. This is a plain string and must be one of the enum values; it is not an HTTP verb.\n\n**Valid values**\n- \"insertMany\": insert the incoming records as new documents in the target collection.\n- \"updateOne\": update a single existing document that matches the filter criteria; typically used together with the filter, update, and upsert properties defined in this schema.\n\n**Field behavior**\n- Determines whether the import creates new documents or modifies existing ones.\n- Drives which companion properties are relevant (e.g., document for inserts; filter, update, and upsert for updates).\n- Influences how duplicate-handling options such as ignoreExtract and ignoreLookupFilter are applied.\n\n**Implementation guidance**\n- Use \"insertMany\" when the requirement is to create or add records.\n- Use \"updateOne\" when the requirement is to update records; combine with upsert: true for \"create or update\" (sync) scenarios.\n- Validate the value against the enum; any other value (including HTTP verbs such as GET or POST) is invalid.\n\n**Examples**\n- \"insertMany\": bulk-create customer documents in the customers collection.\n- \"updateOne\": update an order document matched by the filter, inserting it when upsert is true and no match exists.\n\n**Important notes**\n- The method must be compatible with the other MongoDB properties supplied (filter, document, update, upsert)."},
"collection":{"type":"string","description":"Specifies the exact name of the MongoDB collection where data operations—including insertion, querying, updating, or deletion—will be executed. This property defines the precise target collection within the database, establishing the scope and context of the data affected by the API call. The collection name must strictly adhere to MongoDB's naming conventions to ensure compatibility, prevent conflicts with system-reserved collections, and maintain database integrity. 
Proper naming facilitates efficient data organization, indexing, query performance, and overall database scalability.\n\n**Field behavior**\n- Identifies the specific MongoDB collection for all CRUD (Create, Read, Update, Delete) operations.\n- Directly determines which subset of data is accessed, modified, or deleted during the API request.\n- Must be a valid, non-empty UTF-8 string that complies with MongoDB collection naming rules.\n- Case-sensitive, meaning collections named \"Users\" and \"users\" are distinct and separate.\n- Influences the application’s data flow and operational context by specifying the data container.\n- Changes to this property dynamically alter the target dataset for the API operation.\n\n**Implementation guidance**\n- Validate that the collection name is a non-empty UTF-8 string without null characters, spaces, or invalid symbols such as '$' or '.' except where allowed.\n- Ensure the name does not start with the reserved prefix \"system.\" to avoid conflicts with MongoDB internal collections.\n- Adopt consistent, descriptive, and meaningful naming conventions to enhance database organization, maintainability, and clarity.\n- Verify the existence of the collection before performing operations; implement robust error handling for cases where the collection does not exist.\n- Consider how the collection name impacts indexing strategies, query optimization, sharding, and overall database performance.\n- Maintain consistency in collection naming across the application to prevent unexpected behavior or data access issues.\n- Avoid overly long or complex names to reduce potential errors and improve readability.\n\n**Examples**\n- \"users\"\n- \"orders\"\n- \"inventory_items\"\n- \"logs_2024\"\n- \"customer_feedback\"\n- \"session_data\"\n- \"product_catalog\"\n\n**Important notes**\n- Collection names are case-sensitive and must be used consistently throughout the application to avoid data inconsistencies.\n- Avoid reserved names and prefixes such as \"system.\" to prevent interference with MongoDB’s internal collections and operations.\n- The choice of collection name can significantly affect database performance, indexing efficiency, and scalability.\n- Renaming a collection requires updating all references in the application and may necessitate data"},"filter":{"type":"string","description":"A MongoDB query filter object used to specify detailed criteria for selecting documents within a collection. This filter defines the exact conditions that documents must satisfy to be included in the query results, leveraging MongoDB's comprehensive and expressive query syntax. It supports a wide range of operators and expressions to enable precise and flexible querying of documents based on field values, existence, data types, ranges, nested properties, array contents, and more. 
The filter allows for complex logical combinations and can target deeply nested fields using dot notation, making it a powerful tool for fine-grained data retrieval.\n\n**Field behavior**\n- Determines which documents in the MongoDB collection are matched and returned by the query.\n- Supports complex query expressions including comparison operators (`$eq`, `$gt`, `$lt`, `$gte`, `$lte`, `$ne`), logical operators (`$and`, `$or`, `$not`, `$nor`), element operators (`$exists`, `$type`), array operators (`$in`, `$nin`, `$all`, `$elemMatch`), and evaluation operators (`$regex`, `$expr`).\n- Enables filtering based on simple fields, nested document properties using dot notation, and array contents.\n- Supports querying for null values, missing fields, and specific data types.\n- Allows combining multiple conditions using logical operators to form complex queries.\n- If omitted or provided as an empty object, the query defaults to returning all documents in the collection without filtering.\n- Can be used to optimize data retrieval by narrowing down the result set to only relevant documents.\n\n**Implementation guidance**\n- Construct the filter using valid MongoDB query operators and syntax to ensure correct behavior.\n- Validate and sanitize all input used to build the filter to prevent injection attacks or malformed queries.\n- Utilize dot notation to query nested fields within embedded documents or arrays.\n- Optimize query performance by aligning filters with indexed fields in the MongoDB collection.\n- Handle edge cases such as null values, missing fields, and data type variations appropriately within the filter.\n- Test filters thoroughly to ensure they return expected results and handle all relevant data scenarios.\n- Consider the impact of filter complexity on query execution time and resource usage.\n- Use projection and indexing strategies in conjunction with filters to improve query efficiency.\n- Be mindful of MongoDB version compatibility when using advanced operators or expressions.\n\n**Examples**\n- `{ \"status\": \"active\" }` — selects documents where the `status` field exactly matches \"active\".\n- `{ \"age\": { \"$gte\":"},"document":{"type":"string","description":"The main content or data structure representing a single record in a MongoDB database, encapsulated as a JSON-like object known as a document. This document consists of key-value pairs where keys are field names and values can be of various data types, including nested documents, arrays, strings, numbers, booleans, dates, and binary data, allowing for flexible and hierarchical data representation. It serves as the fundamental unit of data storage and retrieval within MongoDB collections and typically includes a unique identifier field (_id) that ensures each document can be distinctly accessed. Documents can contain metadata, support complex nested structures, and vary in schema within the same collection, reflecting MongoDB's schema-less design. 
This flexibility enables dynamic and evolving data models suited for diverse application needs, supporting efficient querying, indexing, and atomic operations at the document level.\n\n**Field behavior**\n- Represents an individual MongoDB document stored within a collection.\n- Contains key-value pairs with field names as keys and diverse data types as values, including nested documents and arrays.\n- Acts as the primary unit for data storage, retrieval, and manipulation in MongoDB.\n- Includes a mandatory unique identifier field (_id) unless auto-generated by MongoDB.\n- Supports flexible schema, allowing documents within the same collection to have different structures.\n- Can include metadata fields and support indexing for optimized queries.\n- Allows for embedding related data directly within the document to reduce the need for joins.\n- Supports atomic operations at the document level, ensuring consistency during updates.\n\n**Implementation guidance**\n- Ensure compliance with MongoDB BSON format constraints for data types and structure.\n- Validate field names to exclude reserved characters such as '.' and '$' unless explicitly permitted.\n- Support serialization and deserialization processes between application-level objects and MongoDB documents.\n- Enable handling of nested documents and arrays to represent complex data models effectively.\n- Consider indexing frequently queried fields within the document to enhance performance.\n- Monitor document size to stay within MongoDB’s 16MB limit to avoid performance degradation.\n- Use appropriate data types to optimize storage and query efficiency.\n- Implement validation rules or schema enforcement at the application or database level if needed to maintain data integrity.\n\n**Examples**\n- { \"_id\": ObjectId(\"507f1f77bcf86cd799439011\"), \"name\": \"Alice\", \"age\": 30, \"address\": { \"street\": \"123 Main St\", \"city\": \"Metropolis\" } }\n- { \"productId\": \""},"update":{"type":"string","description":"Update operation details specifying the precise modifications to be applied to one or more documents within a MongoDB collection. This field defines the exact changes using MongoDB's rich set of update operators, enabling atomic and efficient modifications to fields, arrays, or embedded documents without replacing entire documents unless explicitly intended. It supports a comprehensive range of update operations such as setting new values, incrementing numeric fields, removing fields, appending or removing elements in arrays, renaming fields, and more, providing fine-grained control over document mutations. Additionally, it accommodates advanced update mechanisms including pipeline-style updates introduced in MongoDB 4.2+, allowing for complex transformations and conditional updates within a single operation. 
The update specification can also leverage array filters and positional operators to target specific elements within arrays, enhancing the precision and flexibility of updates.\n\n**Field behavior**\n- Specifies the criteria and detailed modifications to apply to matching documents in a collection.\n- Supports a wide array of MongoDB update operators like `$set`, `$unset`, `$inc`, `$push`, `$pull`, `$addToSet`, `$rename`, `$mul`, `$min`, `$max`, and others.\n- Can target single or multiple documents depending on the operation context and options such as `multi` or `upsert`.\n- Ensures atomicity of update operations to maintain data integrity and consistency across concurrent operations.\n- Allows both partial updates using operators and full document replacements when the update object is a complete document without operators.\n- Supports pipeline-style updates for complex, multi-stage transformations and conditional logic within updates.\n- Enables the use of array filters and positional operators to selectively update elements within arrays based on specified conditions.\n- Handles upsert behavior to insert new documents if no existing documents match the update criteria, when enabled.\n\n**Implementation guidance**\n- Validate the update object rigorously to ensure compliance with MongoDB update syntax and semantics, preventing runtime errors.\n- Favor using update operators for partial updates to optimize performance and minimize unintended data overwrites.\n- Implement graceful handling for cases where no documents match the update criteria, optionally supporting upsert behavior to insert new documents if none exist.\n- Explicitly control whether updates affect single or multiple documents to avoid accidental mass modifications.\n- Incorporate robust error handling for invalid update operations, schema conflicts, or violations of database constraints.\n- Consider the impact of update operations on indexes, triggers, and other database mechanisms to maintain overall system performance and data integrity.\n- Support advanced update features such as array filters, positional operators"},"upsert":{"type":"boolean","description":"Upsert is a boolean flag that controls whether a MongoDB update operation should insert a new document if no existing document matches the specified query criteria, or strictly update existing documents only. When set to true, the operation performs an atomic \"update or insert\" action: it updates the matching document if found, or inserts a new document constructed from the update criteria if none exists. This ensures that after the operation, a document matching the criteria will exist in the collection. When set to false, the operation only updates documents that already exist and skips the operation entirely if no match is found, preventing any new documents from being created. 
This flag is essential for scenarios where maintaining data integrity and avoiding unintended document creation is critical.\n\n**Field behavior**\n- Determines whether the update operation can create a new document when no existing document matches the query.\n- If true, performs an atomic operation that either updates an existing document or inserts a new one.\n- If false, restricts the operation to updating existing documents only, with no insertion.\n- Influences the database state by enabling conditional document creation during update operations.\n- Affects the outcome of update queries by potentially expanding the dataset with new documents.\n\n**Implementation guidance**\n- Use upsert=true when you need to guarantee the existence of a document after the operation, regardless of prior presence.\n- Use upsert=false to restrict changes strictly to existing documents, avoiding unintended document creation.\n- Ensure that when upsert is true, the update document includes all required fields to create a valid new document.\n- Validate that the query filter and update document are logically consistent to prevent unexpected or malformed insertions.\n- Consider the impact on database size, indexing, and performance when enabling upsert, as it may increase document count.\n- Test the behavior in your specific MongoDB driver and version to confirm upsert semantics and compatibility.\n- Be mindful of potential race conditions in concurrent environments that could lead to duplicate documents if not handled properly.\n\n**Examples**\n- `upsert: true` — Update the matching document if found; otherwise, insert a new document based on the update criteria.\n- `upsert: false` — Update only if a matching document exists; skip the operation if no match is found.\n- Using upsert to create a user profile document if it does not exist during an update operation.\n- Employing upsert in configuration management to ensure default settings documents are present and updated as needed.\n\n**Important notes**\n- Upsert operations can increase"},"ignoreExtract":{"type":"string","description":"Enter the path to the field in the source record that should be used to identify existing records. If a value is found for this field, then the source record will be considered an existing record.\n\nThe dynamic field list makes it easy for you to select the field. If a field contains special characters (which may be the case for certain APIs), then the field is enclosed with square brackets [ ], for example, [field-name].\n\n[*] indicates that the specified field is an array, for example, items[*].id. In this case, you should replace * with a number corresponding to the array item. The value for only this array item (not, the entire array) is checked.\n**CRITICAL:** JSON path to the field in the incoming/source record that should be used to determine if the record already exists in the target system.\n\n**This field is ONLY used when `ignoreExisting` is set to true.** Note: `ignoreExisting` is a separate property that is NOT part of this mongodb configuration.\n\nWhen `ignoreExisting: true` is set (at the import level), this field specifies which field from the incoming data should be checked for a value. If that field has a value, the record is considered to already exist and will be skipped.\n\n**When to set this field**\n\n**ALWAYS set this field** when:\n1. The `ignoreExisting` property will be set to `true` (for \"ignore existing\" scenarios)\n2. 
The user's prompt mentions checking a specific field to identify existing records\n3. The prompt includes phrases like:\n   - \"indicated by the [field] field\"\n   - \"matching the [field] field\"\n   - \"based on [field]\"\n   - \"using the [field] field\"\n   - \"if [field] is populated\"\n   - \"where [field] exists\"\n\n**How to determine the value**\n\n1. **Look for the identifier field in the prompt:**\n   - \"ignoring existing customers indicated by the **id field**\" → `ignoreExtract: \"id\"`\n   - \"skip existing vendors based on the **email field**\" → `ignoreExtract: \"email\"`\n   - \"ignore records where **customerId** is populated\" → `ignoreExtract: \"customerId\"`\n\n2. **Use the field from the incoming/source data** (NOT the target system field):\n   - This is the field path in the data coming INTO the import\n   - Should match a field that will be present in the source records\n\n3. **Apply special character handling:**\n   - If a field contains special characters (hyphens, spaces, etc.), enclose with square brackets: `[field-name]`, `[customer id]`\n   - For array notation, use `[*]` to indicate array items: `items[*].id` (replace `*` with index if needed)\n\n**Field path syntax**\n\n- **Simple field:** `id`, `email`, `customerId`\n- **Nested field:** `customer.id`, `billing.email`\n- **Field with special chars:** `[customer-id]`, `[external_id]`, `[Customer Name]`\n- **Array field:** `items[*].id` (or `items[0].id` for specific index)\n- **Nested in array:** `addresses[*].postalCode`\n\n**Examples**\n\nThese examples show ONLY the mongodb configuration (not the full import structure):\n\n**Example 1: Simple field**\nPrompt: \"Create customers while ignoring existing customers indicated by the id field\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"customers\",\n  \"ignoreExtract\": \"id\"\n}\n```\n(Note: `ignoreExisting: true` should also be set, but at a different level - not shown here)\n\n**Example 2: Field with special characters**\nPrompt: \"Import vendors, skip existing ones based on the vendor-code field\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"vendors\",\n  \"ignoreExtract\": \"[vendor-code]\"\n}\n```\n\n**Example 3: Nested field**\nPrompt: \"Add accounts, ignore existing where customer.email matches\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"accounts\",\n  \"ignoreExtract\": \"customer.email\"\n}\n```\n\n**Example 4: No ignoreExisting scenario**\nPrompt: \"Create or update all customer records\"\n```json\n{\n  \"method\": \"updateOne\",\n  \"collection\": \"customers\"\n}\n```\n(ignoreExtract should NOT be set when ignoreExisting is not true)\n\n**Important notes**\n\n- **DO NOT SET** this field if `ignoreExisting` is not true\n- **`ignoreExisting` is NOT part of this mongodb schema** - it belongs at a different level\n- This specifies the SOURCE field, not the target MongoDB field\n- The field path is case-sensitive\n- Must be a valid field that exists in the incoming data\n- Works in conjunction with `ignoreExisting` - both must be set for this feature to work\n- If the specified field has a value in the incoming record, that record is considered existing and will be skipped\n\n**Common patterns**\n\n- \"ignore existing [records] indicated by the **[field]**\" → Set `ignoreExtract: \"[field]\"`\n- \"skip existing [records] based on **[field]**\" → Set `ignoreExtract: \"[field]\"`\n- \"if **[field]** is populated\" → Set `ignoreExtract: \"[field]\"`\n- No mention of specific field for 
identification → May need to use default like \"id\" or \"_id\""},"ignoreLookupFilter":{"type":"string","description":"If you are adding documents to your MongoDB instance and you have the Ignore Existing flag set to true please enter a filter object here to find existing documents in this collection. The value of this field must be a valid JSON string describing a MongoDB filter object in the correct format and with the correct operators. Refer to the MongoDB documentation for the list of valid query operators and the correct filter object syntax.\n**CRITICAL:** A JSON-stringified MongoDB query filter used to search for existing documents in the target collection.\n\nIt defines the criteria to match incoming records against existing MongoDB documents. If the query returns a result, the record is considered \"existing\" and the operation (usually insert) is skipped.\n\n**Critical distinction vs `ignoreExtract`**\n- Use **`ignoreExtract`** when you just want to check if a field exists on the **INCOMING** record (no DB query).\n- Use **`ignoreLookupFilter`** when you need to query the **MONGODB DATABASE** to see if a record exists (e.g., \"check if email exists in DB\").\n\n**When to set this field**\nSet this field when:\n1. Top-level `ignoreExisting` is `true`.\n2. The prompt implies checking the database for duplicates (e.g., \"skip if email already exists\", \"prevent duplicate SKUs\", \"match on email\").\n\n**Format requirements**\n- Must be a valid **JSON string** representing a MongoDB query.\n- Use Handlebars `{{FieldName}}` to reference values from the incoming record.\n- Format: `\"{ \\\"mongoField\\\": \\\"{{incomingField}}\\\" }\"`\n\n**Examples**\n\n**Example 1: Match on a single field**\nPrompt: \"Insert users, skip if email already exists in the database\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"users\",\n  \"ignoreLookupFilter\": \"{\\\"email\\\":\\\"{{email}}\\\"}\"\n}\n```\n\n**Example 2: Match on multiple fields**\nPrompt: \"Add products, ignore if SKU and VerifyID match existing products\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"products\",\n  \"ignoreLookupFilter\": \"{\\\"sku\\\":\\\"{{sku}}\\\", \\\"verify_id\\\":\\\"{{verify_id}}\\\"}\"\n}\n```\n\n**Example 3: Using operators**\nPrompt: \"Import orders, ignore if order_id matches\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"orders\",\n  \"ignoreLookupFilter\": \"{\\\"order_id\\\": { \\\"$eq\\\": \\\"{{id}}\\\" } }\"\n}\n```\n\n**Important notes**\n- The value MUST be a string (stringify the JSON).\n- Mongo field names (keys) must match the **collection schema**.\n- Handlebars variables (values) must match the **incoming data**."}}},"NetSuite":{"type":"object","description":"Configuration for NetSuite exports","properties":{"lookups":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the category or classification of the lookup being referenced. 
It defines the nature or kind of data that the lookup represents, enabling the system to handle it appropriately based on its type.\n\n**Field behavior**\n- Determines the category or classification of the lookup.\n- Influences how the lookup data is processed and validated.\n- Helps in filtering or grouping lookups by their type.\n- Typically represented as a string or enumerated value indicating the lookup category.\n\n**Implementation guidance**\n- Define a clear set of allowed values or an enumeration for the type to ensure consistency.\n- Validate the type value against the predefined set during data input or API requests.\n- Use the type property to drive conditional logic or UI rendering related to the lookup.\n- Document all possible type values and their meanings for API consumers.\n\n**Examples**\n- \"country\" — representing a lookup of countries.\n- \"currency\" — representing a lookup of currency codes.\n- \"status\" — representing a lookup of status codes or states.\n- \"category\" — representing a lookup of item categories.\n\n**Important notes**\n- The type value should be consistent across the system to avoid ambiguity.\n- Changing the type of an existing lookup may affect dependent systems or data integrity.\n- The type property is essential for distinguishing between different lookup datasets.\n\n**Dependency chain**\n- May depend on or be referenced by other properties that require lookup data.\n- Used in conjunction with lookup values or keys to provide meaningful data.\n- Can influence validation rules or business logic applied to the lookup.\n\n**Technical details**\n- Typically implemented as a string or an enumerated type in the API schema.\n- Should be indexed or optimized for quick filtering and retrieval.\n- May be case-sensitive or case-insensitive depending on system design.\n- Should be included in API documentation with all valid values listed."},"searchField":{"type":"string","description":"searchField specifies the particular field or attribute within a dataset or index that the search operation should target. 
This property determines which field the search query will be applied to, enabling focused and efficient retrieval of relevant information.\n\n**Field behavior**\n- Defines the specific field in the data source to be searched.\n- Limits the search scope to the designated field, improving search precision.\n- Can accept field names as strings, corresponding to indexed or searchable attributes.\n- If omitted or null, the search may default to a predefined field or perform a broad search across multiple fields.\n\n**Implementation guidance**\n- Ensure the field name provided matches exactly with the field names defined in the data schema or index.\n- Validate that the field is searchable and supports the type of queries intended.\n- Consider supporting nested or compound field names if the data structure is hierarchical.\n- Provide clear error handling for invalid or unsupported field names.\n- Allow for case sensitivity or insensitivity based on the underlying search engine capabilities.\n\n**Examples**\n- \"title\" — to search within the title field of documents.\n- \"author.name\" — to search within a nested author name field.\n- \"description\" — to search within the description or summary field.\n- \"tags\" — to search within a list or array of tags associated with items.\n\n**Important notes**\n- The effectiveness of the search depends on the indexing and search capabilities of the specified field.\n- Using an incorrect or non-existent field name may result in no search results or errors.\n- Some fields may require special handling, such as tokenization or normalization, to support effective searching.\n- The property is case-sensitive if the underlying system treats field names as such.\n\n**Dependency chain**\n- Dependent on the data schema or index configuration defining searchable fields.\n- May interact with other search parameters such as search query, filters, or sorting.\n- Influences the search engine or database query construction.\n\n**Technical details**\n- Typically represented as a string matching the field identifier in the data source.\n- May support dot notation for nested fields (e.g., \"user.address.city\").\n- Should be sanitized to prevent injection attacks or malformed queries.\n- Used by the search backend to construct field-specific query clauses."},"expression":{"type":"string","description":"Expression used to define the criteria or logic for the lookup operation. 
This expression typically consists of a string that can include variables, operators, functions, or references to other data elements, enabling dynamic and flexible lookup conditions.\n\n**Field behavior**\n- Specifies the condition or formula that determines how the lookup is performed.\n- Can include variables, constants, operators, and functions to build complex expressions.\n- Evaluated at runtime to filter or select data based on the defined logic.\n- Supports dynamic referencing of other fields or parameters within the lookup context.\n\n**Implementation guidance**\n- Ensure the expression syntax is consistent with the supported expression language or parser.\n- Validate expressions before execution to prevent runtime errors.\n- Support common operators (e.g., ==, !=, >, <, AND, OR) and functions as per the system capabilities.\n- Allow expressions to reference other properties or variables within the lookup scope.\n- Provide clear error messages if the expression is invalid or cannot be evaluated.\n\n**Examples**\n- `\"status == 'active' AND score > 75\"`\n- `\"userRole == 'admin' OR accessLevel >= 5\"`\n- `\"contains(tags, 'urgent')\"`\n- `\"date >= '2024-01-01' AND date <= '2024-12-31'\"`\n\n**Important notes**\n- The expression must be a valid string conforming to the expected syntax.\n- Incorrect or malformed expressions may cause lookup failures or unexpected results.\n- Expressions should be designed to optimize performance, avoiding overly complex or resource-intensive logic.\n- The evaluation context (available variables and functions) should be clearly documented for users.\n\n**Dependency chain**\n- Depends on the lookup context providing necessary variables or data references.\n- May depend on the expression evaluation engine or parser integrated into the system.\n- Influences the outcome of the lookup operation and subsequent data processing.\n\n**Technical details**\n- Typically represented as a string data type.\n- Parsed and evaluated by an expression engine or interpreter at runtime.\n- May support standard expression languages such as SQL-like syntax, JSONPath, or custom domain-specific languages.\n- Should handle escaping and quoting of string literals within the expression."},"resultField":{"type":"string","description":"resultField specifies the name of the field in the lookup result that contains the desired value to be retrieved or used.\n\n**Field behavior**\n- Identifies the specific field within the lookup result data to extract.\n- Determines which piece of information from the lookup response is returned or processed.\n- Must correspond to a valid field name present in the lookup result structure.\n- Used to map or transform lookup data into the expected output format.\n\n**Implementation guidance**\n- Ensure the field name matches exactly with the field in the lookup result, including case sensitivity if applicable.\n- Validate that the specified field exists in all possible lookup responses to avoid runtime errors.\n- Use this property to customize which data from the lookup is utilized downstream.\n- Consider supporting nested field paths if the lookup result is a complex object.\n\n**Examples**\n- \"email\" — to extract the email address field from the lookup result.\n- \"userId\" — to retrieve the user identifier from the lookup data.\n- \"address.city\" — to access a nested city field within an address object in the lookup result.\n\n**Important notes**\n- Incorrect or misspelled field names will result in missing or null values.\n- The field must be 
present in the lookup response schema; otherwise, the lookup will not yield useful data.\n- This property does not perform any transformation; it only selects the field to be used.\n\n**Dependency chain**\n- Depends on the structure and schema of the lookup result data.\n- Used in conjunction with the lookup operation that fetches the data.\n- May influence downstream processing or mapping logic that consumes the lookup output.\n\n**Technical details**\n- Typically a string representing the key or path to the desired field in the lookup result object.\n- May support dot notation for nested fields if supported by the implementation.\n- Should be validated against the lookup result schema during configuration or runtime."},"includeInactive":{"type":"boolean","description":"IncludeInactive: Specifies whether to include inactive items in the lookup results.\n**Field behavior**\n- When set to true, the lookup results will contain both active and inactive items.\n- When set to false or omitted, only active items will be included in the results.\n- Affects the filtering logic applied during data retrieval for lookups.\n**Implementation guidance**\n- Default the value to false if not explicitly provided to avoid unintended inclusion of inactive items.\n- Ensure that the data source supports filtering by active/inactive status.\n- Validate the input to accept only boolean values.\n- Consider performance implications when including inactive items, especially if the dataset is large.\n**Examples**\n- includeInactive: true — returns all items regardless of their active status.\n- includeInactive: false — returns only items marked as active.\n**Important notes**\n- Including inactive items may expose deprecated or obsolete data; use with caution.\n- The definition of \"inactive\" depends on the underlying data model and should be clearly documented.\n- This property is typically used in administrative or audit contexts where full data visibility is required.\n**Dependency chain**\n- Dependent on the data source’s ability to distinguish active vs. inactive items.\n- May interact with other filtering or pagination parameters in the lookup API.\n**Technical details**\n- Data type: Boolean.\n- Default value: false.\n- Typically implemented as a query parameter or request body field in lookup API calls.\n- Requires backend support to filter data based on active status flags or timestamps."},"_id":{"type":"object","description":"_id: The unique identifier for the lookup entry within the system. 
This identifier is typically a string or an ObjectId that uniquely distinguishes each lookup record from others in the database or dataset.\n**Field behavior**\n- Serves as the primary key for the lookup entry.\n- Must be unique across all lookup entries.\n- Immutable once assigned; should not be changed to maintain data integrity.\n- Used to reference the lookup entry in queries, updates, and deletions.\n**Implementation guidance**\n- Use a consistent format for the identifier, such as a UUID or database-generated ObjectId.\n- Ensure uniqueness by leveraging database constraints or application-level checks.\n- Assign the _id at the time of creation of the lookup entry.\n- Avoid exposing internal _id values directly to end-users unless necessary.\n**Examples**\n- \"507f1f77bcf86cd799439011\"\n- \"123e4567-e89b-12d3-a456-426614174000\"\n- \"lookup_001\"\n**Important notes**\n- The _id field is critical for data integrity and should be handled with care.\n- Changing the _id after creation can lead to broken references and data inconsistencies.\n- In some systems, the _id may be automatically generated by the database.\n**Dependency chain**\n- May be referenced by other fields or entities that link to the lookup entry.\n- Dependent on the database or storage system's method of generating unique identifiers.\n**Technical details**\n- Typically stored as a string or a specialized ObjectId type depending on the database.\n- Indexed to optimize query performance.\n- Often immutable and enforced by database schema constraints."},"default":{"type":["object","null"],"description":"A boolean property indicating whether the lookup is the default selection among multiple lookup options.\n\n**Field behavior**\n- Determines if this lookup is automatically selected or used by default in the absence of user input.\n- Only one lookup should be marked as default within a given context to avoid ambiguity.\n- Influences the initial state or value presented to the user or system when multiple lookups are available.\n\n**Implementation guidance**\n- Set to `true` for the lookup that should be the default choice; all others should be `false` or omitted.\n- Validate that only one lookup per group or context has `default` set to `true`.\n- Use this property to pre-populate fields or guide user selections in UI or processing logic.\n\n**Examples**\n- `default: true` — This lookup is the default selection.\n- `default: false` — This lookup is not the default.\n- Omitted `default` property implies the lookup is not the default.\n\n**Important notes**\n- If multiple lookups have `default` set to `true`, the system behavior may be unpredictable.\n- The absence of a `default` lookup means no automatic selection; user input or additional logic is required.\n- This property is typically optional but recommended for clarity in multi-lookup scenarios.\n\n**Dependency chain**\n- Related to the `lookups` array or collection where multiple lookup options exist.\n- May influence UI components or backend logic that rely on default selections.\n\n**Technical details**\n- Data type: Boolean (`true` or `false`).\n- Default value: Typically `false` if omitted.\n- Should be included at the same level as other lookup properties within the `lookups` object."}},"description":"A comprehensive collection of lookup tables or reference data sets designed to support and enhance the main data model or application logic. 
These lookup tables provide predefined, authoritative sets of values that can be consistently referenced throughout the system to ensure data uniformity, minimize redundancy, and facilitate robust data validation and user interface consistency. They serve as centralized repositories for static or infrequently changing data that underpin dynamic application behavior, such as dropdown options, status codes, categorizations, and mappings. By centralizing this reference data, the system promotes maintainability, reduces errors, and enables seamless integration across different modules and services. Lookup tables may also support complex data structures, localization, versioning, and hierarchical relationships to accommodate diverse application requirements and evolving business rules.\n\n**Field behavior**\n- Contains multiple distinct lookup tables, each identified by a unique name representing a specific domain or category of reference data.\n- Each lookup table comprises structured entries, which may be simple key-value pairs or complex objects with multiple attributes, defining valid options, mappings, or metadata.\n- Lookup tables standardize inputs, support UI components like dropdowns and filters, enforce data integrity, and drive conditional logic across the application.\n- Typically static or updated infrequently, ensuring stability and consistency in dependent processes.\n- Supports localization or internationalization to provide values in multiple languages or regional formats.\n- May represent hierarchical or relational data structures within lookup tables to capture complex relationships.\n- Enables versioning to track changes over time and facilitate rollback, auditing, or backward compatibility.\n- Acts as a single source of truth for reference data, reducing duplication and discrepancies across systems.\n- Can be extended or customized to meet specific domain or organizational needs without impacting core application logic.\n\n**Implementation guidance**\n- Organize as a dictionary or map where each key corresponds to a lookup table name and the associated value contains the set of entries.\n- Implement efficient loading, caching, and retrieval mechanisms to optimize performance and minimize latency.\n- Provide controlled update or refresh capabilities to allow seamless modifications without disrupting ongoing application operations.\n- Enforce strict validation rules on lookup entries to prevent invalid, inconsistent, or duplicate data.\n- Incorporate support for localization and internationalization where applicable, enabling multi-language or region-specific values.\n- Maintain versioning or metadata to track changes and support backward compatibility.\n- Consider access control policies if lookup data includes sensitive or restricted information.\n- Design APIs or interfaces for easy querying, filtering, and retrieval of lookup data by client applications.\n- Ensure synchronization mechanisms are in place"},"operation":{"type":"string","enum":["add","update","addupdate","attach","detach","delete"],"description":"Specifies the type of operation to perform on NetSuite records. This is a plain string value, NOT an object.\n\n**Valid values**\n- \"add\" — Create a new record. Use when importing new records only.\n- \"update\" — Update an existing record. Requires internalIdLookup to find the record.\n- \"addupdate\" — Upsert: update if found, create if not. Requires internalIdLookup. 
This is the most common operation.\n- \"attach\" — Attach a record to another record.\n- \"detach\" — Detach a record from another record.\n- \"delete\" — Delete a record. Requires internalIdLookup to find the record.\n\n**Implementation guidance**\n- Value is automatically lowercased by the API.\n- For \"addupdate\", \"update\", and \"delete\", you MUST also set internalIdLookup to specify how to find existing records.\n- For \"add\" with ignoreExisting, set internalIdLookup to check for duplicates before creating.\n- Default to \"addupdate\" when the user says \"sync\", \"upsert\", or \"create or update\".\n- Use \"add\" when the user says \"create\" or \"insert\" without mentioning updates.\n\n**Examples**\n- \"addupdate\" — for syncing records (create or update)\n- \"add\" — for creating new records only\n- \"update\" — for updating existing records only\n- \"delete\" — for removing records\n\n**Important notes**\n- This field is a string, NOT an object. Set it directly: \"operation\": \"addupdate\"\n- Do NOT wrap it in an object like {\"type\": \"addupdate\"} — that will cause a validation error."},"isFileProvider":{"type":"boolean","description":"isFileProvider indicates whether the NetSuite integration functions as a file provider, enabling comprehensive capabilities to access, manage, and manipulate files within the NetSuite environment. When enabled, this property grants the integration full file-related operations such as browsing directories, uploading new files, downloading existing files, updating file metadata, and deleting files directly through the integration interface. This functionality is essential for workflows that require seamless, automated interaction with NetSuite's file storage system, facilitating efficient document handling, version control, and integration with external file management processes. 
It also impacts the availability of specific API endpoints and user interface components related to file management, and may influence system performance and API rate limits due to the volume of file operations.\n\n**Field behavior**\n- When set to true, activates full file management features including browsing, uploading, downloading, updating, and deleting files within NetSuite.\n- When false or omitted, disables all file provider functionalities, preventing any file-related operations through the integration.\n- Acts as a toggle controlling the availability of file-centric capabilities and related API endpoints.\n- Changes typically require re-authentication or reinitialization to apply updated file access permissions correctly.\n- May affect integration performance and API rate limits due to potentially high file operation volumes.\n\n**Implementation guidance**\n- Enable only if direct, comprehensive interaction with NetSuite files is required.\n- Ensure the integration has appropriate NetSuite permissions, roles, and OAuth scopes for secure file access and manipulation.\n- Validate strictly as a boolean to prevent misconfiguration.\n- Assess security implications thoroughly, enforcing strict access controls and audit logging when enabled.\n- Monitor API usage and system performance closely, as file operations can increase load and impact rate limits.\n- Coordinate with NetSuite administrators to align file access policies with organizational security and compliance standards.\n- Consider effects on middleware and downstream systems handling file transfers, error handling, and synchronization.\n\n**Examples**\n- `isFileProvider: true` — Enables full file provider capabilities, allowing browsing, uploading, downloading, and managing files within NetSuite.\n- `isFileProvider: false` — Disables all file-related functionalities, restricting the integration to non-file operations only.\n\n**Important notes**\n- Enabling file provider functionality may require additional NetSuite OAuth scopes, roles, or permissions specifically related to file access.\n- File operations can significantly impact API rate limits and quotas; plan and monitor usage accordingly.\n- This property exclusively controls file management capabilities and does not affect other integration functionalities."},"customFieldMetadata":{"type":"object","description":"Metadata information related to custom fields defined within the NetSuite environment, offering a comprehensive and detailed overview of each custom field's attributes, configuration settings, and operational parameters. This includes critical details such as the field type (e.g., checkbox, select, date, text), label, internal ID, source lists or records for select-type fields, default values, validation rules, display properties, and behavioral flags like mandatory, read-only, or hidden status. The metadata acts as an authoritative reference for accurately managing, validating, and utilizing custom fields during integrations, data processing, and API interactions, ensuring that custom field data is interpreted and handled precisely according to its defined structure, constraints, and business logic. 
It also enables dynamic adaptation to changes in custom field configurations, supporting robust and flexible integration workflows that maintain data integrity and enforce business rules effectively.\n\n**Field behavior**\n- Provides exhaustive metadata for each custom field, including type, label, internal ID, source references, default values, validation criteria, and display properties.\n- Reflects the current and authoritative configuration and constraints of custom fields within the NetSuite account.\n- Enables dynamic validation, processing, and rendering of custom fields during data import, export, and API operations.\n- Supports understanding of field dependencies, mandatory status, read-only or hidden flags, and other behavioral characteristics.\n- Facilitates enforcement of business rules and data integrity through validation and conditional logic based on metadata.\n\n**Implementation guidance**\n- Ensure this property is consistently populated with accurate, up-to-date, and comprehensive metadata for all relevant custom fields in the integration context.\n- Regularly synchronize and refresh metadata with the NetSuite environment to capture any additions, modifications, or deletions of custom fields.\n- Leverage this metadata to validate data payloads before transmission and to correctly interpret and process received data.\n- Use metadata details to implement field-specific logic, such as enforcing mandatory fields, applying validation rules, handling default values, and respecting read-only or hidden statuses.\n- Consider caching metadata where appropriate to optimize performance, while ensuring mechanisms exist to detect and apply updates promptly.\n\n**Examples**\n- Field type: \"checkbox\", label: \"Is Active\", id: \"custentity_is_active\", mandatory: true\n- Field type: \"select\", label: \"Customer Type\", id: \"custentity_customer_type\", sourceList: \"customerTypes\", readOnly: false\n- Field type: \"date\", label: \"Contract Start Date\", id: \"custentity_contract_start"},"recordType":{"type":"string","description":"recordType specifies the exact type of record within the NetSuite system that the API operation will interact with. This property is critical as it defines the schema, fields, validation rules, workflows, and behaviors applicable to the record being accessed, created, updated, or deleted. By accurately specifying the recordType, the API can correctly interpret the structure of the data payload, enforce appropriate business logic, and route the request to the correct processing module within NetSuite. This ensures precise data manipulation, retrieval, and integration aligned with the intended record context. 
Proper use of recordType enables seamless interaction with both standard and custom NetSuite records, supporting robust and flexible API operations across diverse business scenarios.\n\n**Field behavior**\n- Defines the specific NetSuite record type for the API request (e.g., customer, salesOrder, invoice).\n- Directly influences validation rules, available fields, permissible operations, and workflows in the request or response.\n- Determines the context, permissions, and business logic applicable to the record.\n- Must be set accurately and consistently to ensure correct processing and avoid errors.\n- Affects how the API interprets, validates, and processes the associated data payload.\n- Supports interaction with both standard and custom record types configured in the NetSuite account.\n- Enables the API to dynamically adapt to different record structures and business processes based on the recordType.\n\n**Implementation guidance**\n- Use the exact NetSuite internal record type identifiers as defined in official NetSuite documentation and account-specific customizations.\n- Validate the recordType value against the list of supported record types before processing the request to prevent errors.\n- Ensure compatibility of the recordType with the specific API endpoint, operation, and NetSuite account configuration.\n- Implement robust error handling for missing, invalid, or unsupported recordType values, providing clear and actionable feedback.\n- Regularly update the recordType list to reflect changes in NetSuite releases, custom record definitions, and account-specific configurations.\n- Consider role-based access and feature enablement that may affect the availability or behavior of certain record types.\n- When dealing with custom records, verify that the recordType matches the custom record’s internal ID and that the API user has appropriate permissions.\n\n**Examples**\n- \"customer\"\n- \"salesOrder\"\n- \"invoice\"\n- \"employee\"\n- \"itemFulfillment\"\n- \"vendorBill\"\n- \"purchaseOrder\"\n- \"customRecordType123\" (example of a custom record)\n\n**Important notes**\n- The recordType value is case"},"recordTypeId":{"type":"string","description":"recordTypeId is the unique identifier that specifies the exact type or category of a record within the NetSuite system. It defines the structure, fields, validation rules, workflows, and behavior applicable to the record instance, ensuring that API operations target the correct record schema and processing logic. This identifier is essential for routing requests to the appropriate record handlers, enforcing data integrity, and validating the data according to the specific record type's requirements. 
Once set for a record instance, the recordTypeId is immutable, as changing it would imply a fundamentally different record type and could lead to inconsistencies or errors in processing.\n\n**Field behavior**\n- Specifies the precise category or type of NetSuite record being accessed or manipulated.\n- Determines the applicable schema, fields, validation rules, workflows, and business logic for the record.\n- Directs API requests to the correct processing logic, endpoint, and validation routines based on record type.\n- Immutable for a given record instance; changing it indicates a different record type and is not permitted.\n- Influences permissions, access control, and integration behaviors tied to the record type.\n- Affects how related records and dependencies are handled within the system.\n\n**Implementation guidance**\n- Use the exact internal string ID or numeric identifier as defined by NetSuite for record types.\n- Validate the recordTypeId against the current list of supported, enabled, and custom NetSuite record types before processing.\n- Ensure consistency and compatibility between recordTypeId and related fields or data structures within the payload.\n- Implement robust error handling for invalid, unsupported, deprecated, or mismatched recordTypeId values.\n- Keep the recordTypeId definitions synchronized with updates, customizations, and configurations from the NetSuite account to avoid mismatches.\n- Consider case sensitivity and formatting rules as dictated by the NetSuite API specifications.\n- When integrating with custom record types, verify that custom IDs conform to naming conventions and are properly registered in the NetSuite environment.\n\n**Examples**\n- \"customer\" — representing a standard customer record type.\n- \"salesOrder\" — representing a sales order record type.\n- \"invoice\" — representing an invoice record type.\n- Numeric identifiers such as \"123\" if the system uses numeric codes for record types.\n- Custom record type IDs defined within a specific NetSuite account, e.g., \"customRecordType_456\".\n- \"employee\" — representing an employee record type.\n- \"purchaseOrder\" — representing a purchase order record type.\n\n**Important notes**"},"retryUpdateAsAdd":{"type":"boolean","description":"A boolean flag that controls whether failed update operations in NetSuite integrations should be automatically retried as add operations. When set to true, if an update attempt fails—commonly because the target record does not exist—the system will attempt to add the record instead. This mechanism helps maintain data synchronization by addressing scenarios where records may have been deleted, never created, or are otherwise missing, thereby reducing manual intervention and minimizing data discrepancies. 
It ensures smoother integration workflows by providing a fallback strategy specifically for update failures, enhancing resilience and data consistency.\n\n**Field behavior**\n- Governs retry logic exclusively for update operations within NetSuite integrations.\n- When true, triggers an automatic retry as an add operation upon update failure.\n- When false or unset, update failures result in immediate error responses without retries.\n- Facilitates handling of missing or deleted records to improve synchronization accuracy.\n- Does not influence initial add operations or other API call types.\n- Activates retry logic only after detecting a failed update attempt.\n\n**Implementation guidance**\n- Enable this flag when there is a possibility that update requests target non-existent records.\n- Evaluate the potential for duplicate record creation if the add operation is not idempotent.\n- Implement comprehensive error handling and logging to monitor retry attempts and outcomes.\n- Conduct thorough testing in staging or development environments before deploying to production.\n- Coordinate with other retry or error recovery configurations to avoid conflicting behaviors.\n- Ensure accurate detection of update failure causes to apply retries appropriately and avoid unintended consequences.\n\n**Examples**\n- `retryUpdateAsAdd: true` — Automatically retries adding the record if an update fails due to a missing record.\n- `retryUpdateAsAdd: false` — Update failures are reported immediately without retrying as add operations.\n\n**Important notes**\n- Enabling this flag may lead to duplicate records if update failures occur for reasons other than missing records.\n- Use with caution in environments requiring strict data integrity, auditability, and compliance.\n- This flag only modifies retry behavior after an update failure; it does not alter the initial add or update request logic.\n- Accurate failure cause analysis is critical to ensure retries are applied correctly and safely.\n\n**Dependency chain**\n- Relies on execution of update operations against NetSuite records.\n- Works in conjunction with error detection and handling mechanisms to identify failure reasons.\n- Commonly used alongside other retry policies or error recovery settings to enhance integration robustness.\n\n**Technical details**\n- Data type: boolean\n- Default value: false"},"batchSize":{"type":"number","description":"Specifies the number of records to process in a single batch operation when interacting with the NetSuite API, directly influencing the efficiency, performance, and resource utilization of data processing tasks. This parameter controls how many records are sent or retrieved in one API call, balancing throughput with system constraints such as memory usage, network latency, and API rate limits. Proper configuration of batchSize is essential to optimize processing speed while minimizing the risk of errors, timeouts, or throttling by the NetSuite API. Adjusting batchSize allows for fine-tuning of integration workflows to accommodate varying workloads, system capabilities, and API restrictions, ensuring reliable and efficient data synchronization.  \n**Field behavior:**  \n- Determines the count of records included in each batch request to the NetSuite API.  \n- Directly impacts the number of API calls required to complete processing of all records.  \n- Larger batch sizes can improve throughput by reducing the number of calls but may increase memory consumption, processing latency, and risk of timeouts per call.  
\n- Smaller batch sizes reduce memory footprint and improve responsiveness but may increase the total number of API calls and associated overhead.  \n- Influences error handling complexity, as failures in larger batches may require more extensive retries or partial processing logic.  \n- Affects how quickly data is processed and synchronized between systems.  \n**Implementation guidance:**  \n- Select a batch size that balances optimal performance with available system resources, network conditions, and API constraints.  \n- Conduct thorough performance testing with varying batch sizes to identify the most efficient configuration for your specific workload and environment.  \n- Ensure the batch size complies with NetSuite API limits, including maximum records per request and rate limiting policies.  \n- Implement robust error handling to manage partial failures within batches, including retry mechanisms, logging, and potential batch splitting.  \n- Consider the complexity, size, and processing time of individual records when determining batch size, as more complex or larger records may require smaller batches.  \n- Monitor runtime metrics and adjust batchSize dynamically if possible, based on system load, API response times, and error rates.  \n**Examples:**  \n- 100 (process 100 records per batch for balanced throughput and resource use)  \n- 500 (process 500 records per batch to maximize throughput in high-capacity environments)  \n- 50 (smaller batch size suitable for environments with limited memory, stricter API limits, or higher error sensitivity)  \n**Important notes:**  \n- The batchSize setting must stay within NetSuite API limits on records per request; values that are too large increase the risk of timeouts or throttling."},"internalIdLookup":{"type":"object","properties":{"extract":{"type":"string","description":"Extract the relevant internal ID from the provided NetSuite data response to uniquely identify and reference specific records for subsequent API operations. This involves accurately parsing the response data—whether in JSON, XML, or other formats—to isolate the internal ID, which serves as a critical key for actions such as updates, deletions, or detailed queries within NetSuite. The extraction process must be precise, resilient to variations in response structures, and capable of handling nested or complex data formats to ensure reliable downstream processing and prevent errors in record manipulation.  \n**Field behavior:**  \n- Extracts a unique internal ID value from NetSuite API responses or data objects.  \n- Acts as the primary identifier for referencing and manipulating NetSuite records.  \n- Typically invoked after lookup, search, or query operations to obtain the necessary ID for further API calls.  \n- Supports handling of multiple data formats including JSON, XML, and potentially others.  \n- Ensures the extracted ID is valid, correctly formatted, and usable for subsequent operations.  \n- Handles nested or complex response structures to reliably locate the internal ID.  \n- Operates as a read-only field derived exclusively from API response data.  \n\n**Implementation guidance:**  \n- Implement robust and flexible parsing logic tailored to the expected NetSuite response formats.  \n- Validate the extracted internal ID against expected patterns (e.g., numeric strings) to ensure correctness.  \n- Incorporate comprehensive error handling for cases where the internal ID is missing, malformed, or unexpected.  \n- Design the extraction mechanism to be configurable and adaptable to changes in NetSuite API response schemas or record types.  
\n- Support asynchronous processing if API responses are received asynchronously or in streaming formats.  \n- Log extraction attempts and failures to facilitate debugging and maintenance.  \n- Regularly update extraction logic to accommodate API version changes or schema updates.  \n\n**Examples:**  \n- Extracting `\"internalId\": \"12345\"` from a JSON response such as `{ \"internalId\": \"12345\", \"name\": \"Sample Record\" }`.  \n- Parsing nested XML elements or attributes to retrieve the internalId value, e.g., `<record internalId=\"12345\">`.  \n- Using the extracted internal ID to perform update, delete, or detailed fetch operations on a NetSuite record.  \n- Handling cases where the internal ID is embedded within deeply nested or complex response objects.  \n\n**Important notes:**  \n- The internal ID uniquely identifies records within NetSuite and is essential for accurate and reliable API"},"searchField":{"type":"string","description":"The specific field within a NetSuite record that serves as the key attribute for performing an internal ID lookup search. This field determines which property of the record will be queried to locate and retrieve the corresponding internal ID, thereby directly influencing the precision, relevance, and efficiency of the lookup operation. Selecting an appropriate searchField is critical for ensuring accurate matches and optimal performance, as it typically should be a unique, indexed, or otherwise optimized attribute within the record schema. The choice of searchField impacts not only the speed of the search but also the uniqueness and reliability of the results returned, making it essential to align with the record type, user permissions, and any customizations present in the NetSuite environment. Proper selection of this field helps avoid ambiguous results and enhances the overall robustness of the lookup process.\n\n**Field behavior**\n- Identifies the exact attribute of the NetSuite record to be searched.\n- Directs the lookup process to compare the provided search value against this field.\n- Affects the accuracy, relevance, and uniqueness of the internal ID retrieved.\n- Commonly corresponds to fields that are indexed or uniquely constrained for efficient querying.\n- Determines the scope and filtering of the search within the record type.\n- Influences how the system handles multiple matches or ambiguous results.\n- Must correspond to a searchable field that the user has permission to access.\n\n**Implementation guidance**\n- Use only valid, supported field names as defined in the NetSuite record schema.\n- Prefer fields that are indexed or optimized for search to enhance lookup speed.\n- Verify the existence and searchability of the field on the specific record type being queried.\n- Choose fields that minimize duplicates to avoid ambiguous or multiple results.\n- Ensure exact case sensitivity and spelling alignment with NetSuite’s API definitions.\n- Test the field’s searchability considering user permissions, record configurations, and any customizations.\n- Avoid fields that are write-only, restricted, or non-searchable due to NetSuite permissions or settings.\n- Consider the impact of custom fields or scripts that may alter standard field behavior.\n- Validate that the field supports the type of search operation intended (e.g., exact match, partial match).\n\n**Examples**\n- \"externalId\" — to locate records by an externally assigned identifier.\n- \"email\" — to find records using an email address attribute.\n- \"name\" — to search by 
the record’s name or title field.\n- \"tranId\" — to query transaction records by their transaction ID.\n- \"entityId\" — to search entity records by their entity ID."},"expression":{"type":"string","description":"A string representing a search expression used to filter or query records within the NetSuite system based on specific criteria. This expression defines the precise conditions that records must meet to be included in the search results, enabling targeted and efficient data retrieval. It supports a rich syntax including logical operators (AND, OR, NOT), comparison operators (=, !=, >, <, >=, <=), field references, and literal values. Expressions can be nested to form complex queries tailored to specific business needs, allowing for granular control over the search logic. This property is essential for performing internal lookups by matching records against the defined criteria, ensuring that only relevant records are returned.\n\n**Field behavior**\n- Defines the criteria for searching or filtering records by specifying one or more conditions.\n- Supports logical operators such as AND, OR, and NOT to combine multiple conditions logically.\n- Allows inclusion of field names, comparison operators, and literal values to construct meaningful expressions.\n- Enables nested expressions for advanced querying capabilities.\n- Used internally to perform lookups by matching records against the defined expression.\n- Returns records that satisfy all specified conditions within the expression.\n- Evaluates expressions dynamically at runtime to reflect current data states.\n- Supports referencing related record fields to enable cross-record filtering.\n\n**Implementation guidance**\n- Ensure the expression syntax strictly conforms to NetSuite’s search expression format and supported operators.\n- Validate the expression prior to execution to catch syntax errors and prevent runtime failures.\n- Use parameterized values or safe input handling to mitigate injection risks and enhance security.\n- Support and correctly parse nested expressions to handle complex query scenarios.\n- Implement graceful handling for cases where the expression yields no matching records, avoiding errors.\n- Optimize expressions for performance by limiting complexity and avoiding overly broad criteria.\n- Provide clear error messages or feedback when expressions are invalid or unsupported.\n- Consider caching frequently used expressions to improve lookup efficiency.\n\n**Examples**\n- `\"status = 'Open' AND type = 'Sales Order'\"`\n- `\"createdDate >= '2023-01-01' AND createdDate <= '2023-12-31'\"`\n- `\"customer.internalId = '12345' OR customer.email = 'example@example.com'\"`\n- `\"NOT (status = 'Closed' OR status = 'Cancelled')\"`\n- `\"amount > 1000 AND (priority = 'High' OR priority = 'Urgent')\"`\n- `\"item.category = 'Electronics' AND quantity >= 10\"`\n- `\"lastModifiedDate > '2023-01-01'\"`"}},"description":"internalIdLookup is a boolean property that specifies whether the lookup operation should utilize the unique internal ID assigned by NetSuite to an entity for record retrieval. When set to true, the system treats the provided identifier strictly as this immutable internal ID, enabling precise and unambiguous access to the exact record. This approach is essential in scenarios demanding high accuracy and reliability, as internal IDs are guaranteed to be unique within the NetSuite environment and remain consistent over time. 
If set to false or omitted, the lookup defaults to using external identifiers, display names, or other non-internal keys, which may be less precise and could lead to multiple matches or ambiguity in the results.\n\n**Field behavior**\n- Activates strict matching using NetSuite’s unique internal ID when true.\n- Defaults to external or display identifier matching when false or not specified.\n- Changes the lookup logic and matching criteria based on the flag’s value.\n- Ensures efficient, accurate, and unambiguous record retrieval by leveraging stable internal identifiers.\n\n**Implementation guidance**\n- Enable only when the exact internal ID of the target record is known, verified, and appropriate for the lookup.\n- Avoid setting to true if only external or display identifiers are available to prevent failed or incorrect lookups.\n- Confirm that the internal ID corresponds to the correct record type to avoid mismatches or errors.\n- Validate the format and existence of the internal ID before performing the lookup operation.\n- Implement comprehensive error handling to manage cases where the internal ID is invalid, missing, or does not correspond to any record.\n\n**Examples**\n- `internalIdLookup: true` — Retrieve a customer record by its NetSuite internal ID for precise identification.\n- `internalIdLookup: false` — Search for an inventory item using its external SKU or descriptive name.\n- Omitting `internalIdLookup` defaults to false, causing the system to perform lookups based on external or display identifiers.\n\n**Important notes**\n- Internal IDs are stable, unique, and system-generated identifiers within NetSuite, providing the most reliable reference for records.\n- Using internal IDs can improve lookup performance and reduce ambiguity compared to relying on external or display identifiers.\n- Incorrectly setting this flag to true with non-internal IDs will likely result in lookup failures or no matching records found.\n- This property is specific to NetSuite integrations and may not be applicable or recognized in other systems or contexts.\n- Ensure synchronization between the internal ID used and the expected record type to maintain data integrity and"},"preferences":{"type":"object","properties":{"ignoreReadOnlyFields":{"type":"boolean","description":"ignoreReadOnlyFields specifies whether the system should bypass any attempts to modify fields designated as read-only during data processing or update operations. When set to true, the system silently ignores changes to these protected fields, preventing errors or exceptions that would otherwise occur if such modifications were attempted. This behavior facilitates smoother integration and more resilient data handling, especially in environments where partial updates are common or where read-only constraints are strictly enforced. Conversely, when set to false, the system will attempt to update all fields, including those marked as read-only, which may result in errors or exceptions if such modifications are not permitted.  \n**Field behavior:**  \n- Controls whether read-only fields are excluded from update or modification attempts.  \n- When true, modifications to read-only fields are skipped silently without raising errors or exceptions.  \n- When false, attempts to modify read-only fields may cause errors, exceptions, or transaction failures.  \n- Helps maintain system stability by preventing unauthorized or unsupported changes to protected data.  
\n- Influences how partial updates and patch operations are processed in the presence of read-only constraints.  \n- Affects error reporting and feedback mechanisms during data validation and update workflows.  \n\n**Implementation guidance:**  \n- Enable this flag to enhance robustness and fault tolerance when integrating with APIs or systems enforcing read-only field restrictions.  \n- Use in scenarios where it is acceptable or expected that read-only fields remain unchanged during update operations.  \n- Ensure downstream business logic, validation rules, and data consistency checks accommodate the possibility that some fields may not be updated.  \n- Carefully evaluate the impact on data integrity, audit requirements, and compliance before enabling this behavior.  \n- Coordinate with audit, logging, and monitoring mechanisms to accurately capture which fields were updated, skipped, or ignored.  \n- Consider the implications for user feedback and error handling in client applications consuming the API.  \n\n**Examples:**  \n- ignoreReadOnlyFields: true — Update operations will silently skip read-only fields, avoiding errors and allowing partial updates to succeed.  \n- ignoreReadOnlyFields: false — Update operations will attempt to modify all fields, potentially triggering errors or exceptions if read-only fields are included.  \n- In a bulk update scenario, setting this to true prevents the entire batch from failing due to attempts to modify read-only fields.  \n- When performing a patch request, enabling this flag allows the system to apply only permissible changes without interruption.  \n\n**Important notes:**  \n- Setting this flag to true does not grant permission to modify read-only fields; it only suppresses the errors that such modification attempts would otherwise raise."},"warningAsError":{"type":"boolean","description":"Indicates whether warnings encountered during processing should be escalated and treated as errors, causing the operation to fail immediately upon any warning detection. This setting enforces stricter validation and operational integrity by preventing processes from completing successfully if any warnings arise, thereby ensuring higher data quality, compliance standards, and system reliability. When enabled, it transforms non-critical warnings into blocking errors, which can halt workflows, trigger rollbacks, or initiate error handling routines. 
This behavior is particularly valuable in environments where data accuracy and regulatory adherence are paramount, but it may also increase failure rates and require more robust error management and user communication strategies.\n\n**Field behavior**\n- When set to true, any warning triggers an error state, halting or failing the operation immediately.\n- When set to false, warnings are logged or reported but do not interrupt the workflow or cause failure.\n- Influences validation, data processing, integration, and transactional workflows where warnings might otherwise be non-blocking.\n- Affects error reporting mechanisms and may alter the flow of retries, rollbacks, or aborts.\n- Can change the overall system behavior by elevating the severity of warnings to errors.\n- Impacts downstream processes by potentially preventing partial or inconsistent data states.\n\n**Implementation guidance**\n- Use to enforce strict data integrity and prevent continuation of processes with potential issues.\n- Assess the impact on user experience, as enabling this may increase failure rates and necessitate enhanced error handling and user notifications.\n- Ensure client applications and integrations are designed to handle errors resulting from escalated warnings gracefully.\n- Clearly document the default behavior when this flag is unset to avoid ambiguity for users and developers.\n- Coordinate with logging, monitoring, and alerting systems to provide clear diagnostics and traceability when warnings are treated as errors.\n- Consider providing configuration options at multiple levels (user, project, global) to allow flexible enforcement.\n- Test thoroughly in staging environments to understand the operational impact before enabling in production.\n\n**Examples**\n- warningAsError: true — Data import fails immediately if any warnings are detected during validation, preventing partial or potentially corrupt data ingestion.\n- warningAsError: false — Warnings during data validation are logged but do not stop the import process, allowing for continued operation despite minor issues.\n- warningAsError: true — Integration workflows abort on any warning to maintain strict compliance with regulatory or business rules.\n- warningAsError: false — Batch processing continues despite warnings, with issues flagged for later review."},"skipCustomMetadataRequests":{"type":"boolean","description":"Indicates whether to bypass requests for custom metadata during API operations, enabling optimized performance by reducing unnecessary data retrieval and processing when custom metadata is not required. This flag allows fine-tuning of the API's behavior to specific use cases by controlling the inclusion of potentially resource-intensive custom metadata. By skipping these requests, the system can achieve faster response times, lower bandwidth usage, and reduced computational overhead, especially beneficial in high-throughput or performance-sensitive scenarios. Conversely, including custom metadata ensures comprehensive data context, which is critical for operations requiring full validation, auditing, or enrichment.  \n**Field behavior:**  \n- When set to true, the system omits fetching and processing any custom metadata associated with the request, improving efficiency and reducing payload size.  \n- When set to false or left unspecified, the system retrieves and processes all relevant custom metadata, ensuring complete data context.  
\n- Directly influences the amount of data transmitted and processed, impacting API responsiveness and resource utilization.  \n- Affects downstream workflows that depend on the presence or absence of custom metadata for decision-making or data integrity.  \n**Implementation guidance:**  \n- Employ this flag to optimize performance in scenarios where custom metadata is unnecessary, such as bulk data exports, read-only queries, or environments with stable metadata.  \n- Assess the potential impact on data completeness, validation, auditing, and enrichment to avoid compromising critical business logic.  \n- Clearly communicate the default behavior when the property is omitted to prevent misunderstandings among API consumers.  \n- Enforce boolean validation to maintain consistent and predictable API behavior.  \n- Consider how this setting interacts with caching mechanisms, data synchronization, and other metadata-related preferences to ensure coherent system behavior.  \n**Examples:**  \n- `true`: Skip all custom metadata requests to expedite processing and minimize resource consumption in performance-critical workflows.  \n- `false`: Include custom metadata requests to obtain full metadata details necessary for thorough data handling and validation.  \n**Important notes:**  \n- Skipping custom metadata may lead to incomplete data contexts if downstream processes rely on that metadata for essential functions like validation or auditing.  \n- This setting is most effective when metadata is static, infrequently changed, or irrelevant to the current operation.  \n- Altering this flag can influence caching strategies, data consistency, and synchronization across distributed systems.  \n- Ensure all relevant stakeholders understand the implications of toggling this flag to maintain data integrity and operational correctness.  \n**Dependency chain:**  \n- Relies on the existence and"}},"description":"An object encapsulating the user's customizable settings and options within the NetSuite environment, enabling a highly personalized and efficient user experience. This object encompasses a broad spectrum of configurable preferences including interface layout, language, timezone, notification methods, default currencies, date and time formats, themes, and other user-specific options that directly influence how the system behaves, appears, and interacts with the individual user. Preferences are designed to be flexible, supporting inheritance from role-based or system-wide defaults while allowing users to override settings to suit their unique workflows and requirements. 
The preferences object supports dynamic retrieval and partial updates, ensuring that changes can be made granularly without affecting unrelated settings, thereby maintaining data integrity and user experience consistency.\n\n**Field behavior**\n- Contains user-specific configuration settings that tailor the NetSuite experience to individual needs and roles.\n- Includes preferences related to UI layout, notification settings, default currencies, date/time formats, language, themes, and other personalization options.\n- Supports dynamic retrieval and partial updates, enabling users to modify individual preferences without overwriting the entire object.\n- Allows inheritance of preferences from role-based or system-wide defaults when user-specific settings are not explicitly defined.\n- Changes to preferences immediately affect the user interface, notification delivery, data presentation, and overall system behavior.\n- Preferences persist across sessions and devices, ensuring a consistent user experience.\n- Supports both simple scalar values and complex nested structures to accommodate diverse configuration needs.\n\n**Implementation guidance**\n- Structure as a nested JSON object with clearly defined and documented sub-properties grouped by categories such as notifications, display settings, localization, and system defaults.\n- Validate all input during updates to ensure data integrity, prevent invalid configurations, and maintain system stability.\n- Implement partial update mechanisms (e.g., PATCH semantics) to allow granular and efficient modifications.\n- Enforce strict access controls to ensure only authorized users can view or modify preferences.\n- Consider versioning the preferences schema to support backward compatibility and future enhancements.\n- Provide comprehensive documentation for each sub-property to facilitate correct usage, integration, and maintenance.\n- Optimize retrieval and update operations for performance, especially in environments with large user bases.\n- Ensure compatibility with role-based access controls and system-wide default settings to maintain coherent preference hierarchies.\n\n**Examples**\n- `{ \"language\": \"en-US\", \"timezone\": \"America/New_York\", \"currency\": \"USD\", \"notifications\": { \"email\": true, \"sms\": false } }`\n- `{ \"dashboardLayout\": \"compact\", \""},"file":{"type":"object","properties":{"name":{"type":"string","description":"The name of the file as it will be identified, stored, and managed within the NetSuite system. This filename serves as the primary unique identifier for the file within its designated folder or context, ensuring precise referencing, retrieval, and manipulation across various NetSuite modules, integrations, and user interfaces. It is critical that the filename is clear, descriptive, and relevant, as it is used not only for backend operations but also for display purposes in reports, dashboards, and file listings. The filename must strictly adhere to NetSuite’s naming conventions and restrictions to prevent conflicts, errors, or access issues, including limitations on character usage and length. 
Proper naming facilitates efficient file organization, version control, and seamless integration with external systems.\n\n**Field behavior**\n- Acts as the unique identifier for the file within its folder or storage context, preventing duplication.\n- Used extensively in file retrieval, linking, display, and management operations throughout NetSuite.\n- Must be unique within the folder to avoid overwriting or access conflicts.\n- Supports inclusion of standard file extensions to indicate file type and enable appropriate handling.\n- Case sensitivity may vary depending on the operating environment and integration specifics.\n- Represents only the filename without any directory or path information.\n- Changes to the filename after creation can impact existing references or integrations.\n\n**Implementation guidance**\n- Choose a concise, descriptive, and meaningful name to facilitate easy identification and management.\n- Avoid special characters, symbols, or reserved characters (e.g., \\ / : * ? \" < > |) disallowed by NetSuite’s file naming rules.\n- Ensure the filename length complies with NetSuite’s maximum character limit (typically 255 characters).\n- Always include the appropriate file extension (e.g., .pdf, .docx, .png) to clearly denote the file format.\n- Exclude any path separators, folder names, or hierarchical data—only the filename itself should be provided.\n- Adopt consistent naming conventions that support version control, categorization, or organizational standards.\n- Validate filenames programmatically where possible to enforce compliance and prevent errors.\n- Consider the impact of renaming files on existing workflows and references before making changes.\n\n**Examples**\n- \"invoice_2024_06.pdf\"\n- \"project_plan_v2.docx\"\n- \"company_logo.png\"\n- \"financial_report_Q1_2024.xlsx\"\n- \"user_manual_v1.3.pdf\"\n\n**Important notes**\n- The filename must not contain directory or path information; it strictly represents"},"fileType":{"type":"string","description":"The fileType property specifies the precise format or category of the file managed within the NetSuite file system, such as PDF, CSV, JPEG, or other supported types. This designation is essential because it directly affects how the system processes, displays, stores, and interacts with the file throughout its lifecycle. Correctly identifying the fileType ensures that files are rendered correctly in the user interface, processed through appropriate workflows, and subjected to any file-specific restrictions, permissions, or validations. It is typically required when uploading or creating new files to guarantee compatibility, prevent errors during file operations, and maintain data integrity. 
Additionally, the fileType influences system behaviors such as indexing, searchability, and integration with other NetSuite modules or external systems.\n\n**Field behavior**\n- Defines the file’s format or category, governing system handling, processing, and display.\n- Determines rendering, storage methods, and permissible manipulations within NetSuite.\n- Enables or restricts specific operations based on the file type’s inherent capabilities and limitations.\n- Usually mandatory during file creation or upload to ensure accurate system recognition and handling.\n- Influences validation checks, error handling, and workflow routing during file operations.\n- Affects integration points with other NetSuite features, such as reporting, scripting, or external API interactions.\n\n**Implementation guidance**\n- Validate the fileType against a predefined list of supported NetSuite file types to ensure compatibility and prevent errors.\n- Ensure the fileType accurately reflects the actual content format of the file to avoid processing failures or data corruption.\n- Prefer using standardized MIME types or NetSuite-specific identifiers where applicable for consistency and interoperability.\n- Implement robust error handling to gracefully manage unsupported, unrecognized, or mismatched file types.\n- Consider system versioning, configuration differences, and customizations that may affect supported file types.\n- When changing fileType post-creation, ensure appropriate reprocessing or format conversion to maintain data integrity.\n\n**Examples**\n- \"PDF\" for Portable Document Format files commonly used for documents.\n- \"CSV\" for Comma-Separated Values files frequently used for data exchange.\n- \"JPG\" or \"JPEG\" for image files in JPEG format.\n- \"PLAINTEXT\" for plain text files without formatting.\n- \"EXCEL\" for Microsoft Excel spreadsheet files.\n- \"XML\" for Extensible Markup Language files used in data interchange.\n- \"ZIP\" for compressed archive files.\n\n**Important notes**\n- Consistency between the fileType and the actual file content is essential; a mismatch can cause processing failures or data corruption."},"folder":{"type":"string","description":"The identifier or path of the folder within the NetSuite file cabinet where the file is stored or intended to be stored. This property precisely defines the file's storage location, enabling organized management, categorization, and efficient retrieval within the NetSuite environment. It supports specification either as a unique numeric folder ID or as a string representing the folder path, accommodating both absolute and relative references depending on the API context. 
Proper assignment ensures files are correctly placed within the hierarchical folder structure, facilitating access control, streamlined file operations, and maintaining organizational consistency.\n\n**Field behavior**\n- Specifies the exact folder location for storing or moving the file within the NetSuite file cabinet.\n- Accepts either a numeric folder ID for unambiguous identification or a string folder path for hierarchical referencing.\n- Mandatory when uploading new files or relocating existing files to define their destination.\n- Optional during file metadata retrieval if folder context is implicit or not required.\n- Updating this property on an existing file triggers relocation to the specified folder.\n- Influences file visibility and access permissions based on folder-level security settings.\n- Supports both absolute and relative folder path formats depending on API usage context.\n\n**Implementation guidance**\n- Confirm the target folder exists and is accessible within the NetSuite file cabinet before assignment.\n- Prefer using the internal numeric folder ID to avoid ambiguity and ensure precise targeting.\n- Support both absolute and relative folder paths where the API context allows.\n- Enforce permission validation to verify that the user or integration has adequate rights to access or modify the folder.\n- Normalize folder paths to comply with NetSuite’s hierarchical structure and naming conventions.\n- Provide informative error responses if the folder is invalid, non-existent, or access is denied.\n- When moving files by updating this property, ensure that any dependent metadata or references are updated accordingly.\n\n**Examples**\n- \"123\" (numeric folder ID representing a specific folder)\n- \"/Documents/Invoices\" (string folder path indicating a nested folder structure)\n- \"456\" (numeric folder ID for a project-specific folder)\n- \"Shared/Marketing/Assets\" (relative folder path within the file cabinet hierarchy)\n\n**Important notes**\n- Folder IDs are unique within the NetSuite account and are the recommended method for precise folder referencing.\n- Folder paths must conform to NetSuite’s naming standards and reflect the correct folder hierarchy.\n- Changing the folder property on an existing file effectively moves the file within the file cabinet.\n- Adequate permissions are required to access or modify the specified folder;"},"folderInternalId":{"type":"string","description":"The internal identifier of the folder within the NetSuite file cabinet where the file is stored. This unique, system-generated integer ID precisely specifies the folder location, enabling accurate file operations such as upload, download, retrieval, and organization. 
It is essential for associating files with their respective folders and maintaining the hierarchical structure of the file cabinet, ensuring consistent file management and access control.\n\n**Field behavior**\n- Represents a unique, system-assigned internal ID for a folder in the NetSuite file cabinet.\n- Required to specify or retrieve the exact folder location during file management operations.\n- Must correspond to an existing folder within the NetSuite account to ensure valid file placement.\n- Changing this ID effectively moves the file to a different folder within the file cabinet.\n- Permissions on the folder influence access and operations on the associated files.\n- The field is mandatory when creating or updating a file to define its storage location.\n- Supports hierarchical folder structures by linking files to nested folders via their internal IDs.\n\n**Implementation guidance**\n- Validate that the folderInternalId is a valid integer and corresponds to an existing folder before use.\n- Retrieve valid folder internal IDs through NetSuite’s SuiteScript, REST API, or UI to avoid errors.\n- Implement error handling for cases where the folderInternalId is missing, invalid, or points to a restricted folder.\n- Update this ID carefully, as it changes the file’s folder location and may affect file accessibility.\n- Ensure that the user or integration has appropriate permissions to access or modify the target folder.\n- When moving files between folders, confirm that the destination folder supports the file type and intended use.\n- Cache or store folderInternalId values cautiously to prevent stale references in dynamic folder structures.\n\n**Examples**\n- 12345\n- 67890\n- 101112\n\n**Important notes**\n- This ID is distinct from the folder name; it is a system-generated unique identifier.\n- Modifying the folderInternalId moves the file to a different folder, impacting file organization.\n- Folder permissions and sharing settings can restrict or enable file operations based on this ID.\n- Accurate use of this ID is critical for maintaining the integrity of the file cabinet’s folder hierarchy.\n- The folderInternalId cannot be arbitrarily assigned; it must be obtained from NetSuite’s system.\n- Changes to folder structure or deletion of folders may invalidate existing folderInternalId references.\n\n**Dependency chain**\n- Depends on the existence and structure of folders within the NetSuite"},"internalId":{"type":"string","description":"Unique identifier assigned internally to a file within the NetSuite system, serving as the primary key for that file record. This identifier is essential for programmatic referencing and management of the file, enabling precise operations such as retrieval, update, or deletion. The internalId is immutable once assigned and guaranteed to be unique across all file records, ensuring accurate and consistent identification. It is automatically generated by NetSuite upon file creation and is consistently used across both REST and SOAP API endpoints to perform file-related actions. 
This identifier is critical for maintaining data integrity, enabling seamless integration, and supporting reliable file management workflows within the NetSuite ecosystem.\n\n**Field behavior**\n- Acts as the definitive and immutable identifier for a file within NetSuite.\n- Must be unique for each file record to prevent conflicts.\n- Required for all API operations targeting a specific file, including retrieval, updates, and deletions.\n- Cannot be manually assigned, duplicated, or reassigned to another file.\n- Remains constant throughout the lifecycle of the file record.\n\n**Implementation guidance**\n- Always retrieve the internalId from NetSuite responses after file creation or queries.\n- Use the internalId exclusively for all subsequent API calls involving the file.\n- Avoid any manual assignment or modification of the internalId to prevent inconsistencies.\n- Validate that the internalId conforms to NetSuite’s expected format before use in API calls.\n- Handle errors gracefully when an invalid or non-existent internalId is provided.\n\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"1001\"\n\n**Important notes**\n- Providing an incorrect or non-existent internalId will result in API errors or failed operations.\n- The internalId is distinct and separate from the file name, file path, or any external identifiers.\n- It plays a critical role in maintaining data integrity and ensuring accurate file referencing within NetSuite.\n- The internalId cannot be reused or recycled for different files.\n\n**Dependency chain**\n- Requires the existence of the corresponding file record within NetSuite.\n- Utilized by API methods such as get, update, and delete for file management operations.\n- May be referenced by other NetSuite entities or records that link to the file.\n- Dependent on NetSuite’s internal database and indexing mechanisms for uniqueness and retrieval.\n\n**Technical details**\n- Represented as a string or numeric value depending on the API context.\n- Automatically assigned by NetSuite during the file creation process.\n- Stored as the primary key in the Net"},"backupFolderInternalId":{"type":"string","description":"backupFolderInternalId is the unique internal identifier assigned by NetSuite to a specific folder within the file cabinet designated exclusively for storing backup files. This identifier is essential for accurately locating, managing, and organizing backup data within the system’s hierarchical file storage structure. It ensures that all backup operations—such as file creation, modification, and retrieval—are precisely targeted to the correct folder, thereby facilitating reliable data preservation, streamlined backup workflows, and efficient recovery processes.  \n**Field behavior:**  \n- Uniquely identifies the backup folder within the NetSuite file cabinet, eliminating ambiguity in folder selection.  \n- Used to specify or retrieve the exact storage location for backup files during automated or manual backup operations.  \n- Typically assigned and referenced during backup configuration, file management tasks, or programmatic interactions with the NetSuite API.  \n- Enables seamless integration with automated backup processes by directing files to the intended folder without manual intervention.  \n**Implementation guidance:**  \n- Verify that the internal ID corresponds to an existing, active, and accessible folder within the NetSuite file cabinet before use.  
\n- Ensure the designated folder has appropriate permissions set to allow creation, modification, and retrieval of backup files.  \n- Avoid using internal IDs of folders that are restricted, archived, deleted, or otherwise unsuitable for active backup storage to prevent errors.  \n- Maintain consistent use of this ID across all API calls, scripts, and backup configurations to uphold data integrity and organizational consistency.  \n**Examples:**  \n- \"12345\" (example of a valid internal folder ID)  \n- \"67890\"  \n- \"34567\"  \n**Important notes:**  \n- This identifier is internal to NetSuite and does not correspond to external folder names, display names, or file system paths.  \n- Changing this ID will redirect backup files to a different folder, which may disrupt existing backup workflows or data organization.  \n- Proper folder permissions are essential to avoid backup failures, access denials, or data loss.  \n- Once set for a given backup configuration, this ID should be treated as immutable unless a deliberate and well-documented change is required.  \n**Dependency chain:**  \n- Requires the target folder to exist within the NetSuite file cabinet and be properly configured for backup storage.  \n- Interacts closely with backup configuration settings, file management APIs, and NetSuite’s permission and security models.  \n- Dependent on NetSuite’s internal folder management system to maintain uniqueness and integrity of folder"}},"description":"Represents a comprehensive file object within the NetSuite system, serving as a fundamental entity for uploading, storing, managing, and referencing a wide variety of file types including documents, images, spreadsheets, audio, video, and other media formats. This object facilitates seamless integration and association of file data with NetSuite records, transactions, custom entities, and workflows, enabling efficient document management, retrieval, and automation across the platform. It supports detailed metadata management such as file name, type, size, folder location, internal identifiers, encoding formats, and timestamps, ensuring precise control, traceability, and auditability of files. The file object can represent both files stored internally in the NetSuite file Cabinet and externally referenced files via URLs or other means, providing flexibility in file handling and access. It enforces platform-specific constraints on file size, type, and permissions to maintain system integrity, security, and compliance with organizational policies. 
Additionally, it supports versioning and updates to existing files, allowing for effective file lifecycle management within NetSuite.\n\n**Field behavior**\n- Used to specify, upload, retrieve, update, or link files associated with NetSuite records, transactions, custom entities, or workflows.\n- Supports a broad spectrum of file types including PDFs, images (JPEG, PNG, GIF), spreadsheets (Excel, CSV), text documents, audio, video, and other media formats.\n- Can represent files stored internally within the NetSuite file Cabinet or externally referenced files via URLs or other means.\n- Enables attachment of files to records to provide enhanced context, documentation, and audit trails.\n- Manages comprehensive file metadata including internalId, name, fileType, size, folder location, encoding, and timestamps.\n- Supports versioning and updates to existing files where applicable.\n- Enforces platform-specific restrictions on file size, type, and access permissions.\n- Facilitates secure file access and sharing based on user roles and permissions.\n\n**Implementation guidance**\n- Include complete and accurate metadata such as internalId, name, fileType, size, folder, encoding, and timestamps to ensure reliable file referencing and management.\n- Validate file types and sizes against NetSuite’s platform restrictions prior to upload to prevent errors and ensure compliance.\n- Use appropriate encoding methods (e.g., base64) for file content during API transmission to maintain data integrity.\n- Employ multipart/form-data encoding for file uploads when required by the API specifications.\n- Ensure users have the necessary roles and permissions to upload, retrieve, or modify files"}}},"NetsuiteDistributed":{"type":"object","description":"Configuration for NetSuite Distributed (SuiteApp 2.0) import operations.\nThis is the primary sub-schema for NetSuiteDistributedImport adaptorType.\nThe API field name is \"netsuite_da\".\n\n**Key fields**\n- operation: REQUIRED — the NetSuite operation (add, update, addupdate, etc.)\n- recordType: REQUIRED — the NetSuite record type (e.g., \"customer\", \"salesOrder\")\n- mapping: field mappings from source data to NetSuite fields\n- internalIdLookup: how to find existing records for update/addupdate/delete\n- restletVersion: defaults to \"suiteapp2.0\", rarely needs to be changed\n","properties":{"operation":{"type":"string","enum":["add","update","addupdate","attach","detach","delete"],"description":"The NetSuite operation to perform. REQUIRED. This is a plain string value, NOT an object.\n\n**Valid values**\n- \"add\" — Create new records only.\n- \"update\" — Update existing records. Requires internalIdLookup.\n- \"addupdate\" — Upsert: update if found, create if not. Requires internalIdLookup. Most common.\n- \"attach\" — Attach a record to another record.\n- \"detach\" — Detach a record from another record.\n- \"delete\" — Delete records. Requires internalIdLookup.\n\n**Guidance**\n- Default to \"addupdate\" when user says \"sync\", \"upsert\", or \"create or update\".\n- Use \"add\" when user says \"create\" or \"insert\" without mentioning updates.\n- For \"update\", \"addupdate\", and \"delete\", you MUST also set internalIdLookup.\n\n**Important**\nThis field is a string, NOT an object. Set it directly: \"operation\": \"addupdate\"\nDo NOT send {\"type\": \"addupdate\"} — that causes a Cast error.\n"},"recordType":{"type":"string","description":"The NetSuite record type to import into. 
REQUIRED.\n\nMust match a valid NetSuite record type identifier.\n\n**Examples**\n- \"customer\"\n- \"salesOrder\"\n- \"invoice\"\n- \"purchaseOrder\"\n- \"vendorBill\"\n- \"itemFulfillment\"\n- \"employee\"\n- \"inventoryItem\"\n- \"customrecord_myrecord\" (custom record types)\n"},"recordIdentifier":{"type":"string","description":"Custom record identifier, used to identify the specific record type when\nimporting into custom record types.\n"},"recordTypeId":{"type":"string","description":"The internal record type ID. Used for custom record types in NetSuite where\nthe numeric ID is needed in addition to the recordType string.\n"},"restletVersion":{"type":"string","enum":["suitebundle","suiteapp1.0","suiteapp2.0"],"description":"The version of the NetSuite RESTlet to use. This is a plain STRING, NOT an object.\n\nDefaults to \"suiteapp2.0\" when useSS2Restlets is true, \"suitebundle\" otherwise.\nAlmost always \"suiteapp2.0\" for modern integrations — rarely needs to be explicitly set.\n\nIMPORTANT: Set as a plain string value, e.g. \"suiteapp2.0\"\nDo NOT send {\"type\": \"suiteapp2.0\"} — that causes a Cast error.\n"},"useSS2Restlets":{"type":"boolean","description":"Whether to use SuiteScript 2.0 RESTlets. Defaults to true for modern integrations.\nWhen true, restletVersion defaults to \"suiteapp2.0\".\n"},"missingOrCorruptedDAConfig":{"type":"boolean","description":"Flag indicating whether the Distributed Adaptor configuration is missing or corrupted.\nSet by the system — do not set manually.\n"},"batchSize":{"type":"number","description":"Number of records to process per batch. Controls how many records are sent\nto NetSuite in a single API call. Typical values: 50-200.\n"},"internalIdLookup":{"type":"object","description":"Configuration for looking up existing NetSuite records by internal ID.\nREQUIRED when operation is \"update\", \"addupdate\", or \"delete\".\n\nDefines how to find existing records in NetSuite to match against incoming data.\n","properties":{"extract":{"type":"string","description":"The path in the source record to extract the lookup value from.\nExample: \"internalId\" or \"externalId\"\n"},"searchField":{"type":"string","description":"The NetSuite field to search against.\nExample: \"externalId\", \"email\", \"name\", \"tranId\"\n"},"operator":{"type":"string","description":"The comparison operator for the lookup.\nExample: \"is\", \"contains\", \"startswith\"\n"},"expression":{"type":"string","description":"A NetSuite search expression for complex lookup conditions.\nUsed for multi-field or conditional lookups.\n"}}},"hooks":{"type":"object","description":"Script hooks for custom processing at different stages of the import.\nEach hook references a SuiteScript file and function.\n","properties":{"preMap":{"type":"object","description":"Runs before field mapping is applied.","properties":{"fileInternalId":{"type":"string","description":"NetSuite internal ID of the SuiteScript file."},"function":{"type":"string","description":"Name of the function to execute."},"configuration":{"type":"object","description":"Configuration object passed to the hook function."}}},"postMap":{"type":"object","description":"Runs after field mapping, before submission to NetSuite.","properties":{"fileInternalId":{"type":"string","description":"NetSuite internal ID of the SuiteScript file."},"function":{"type":"string","description":"Name of the function to execute."},"configuration":{"type":"object","description":"Configuration object passed to the hook 
function."}}},"postSubmit":{"type":"object","description":"Runs after the record is submitted to NetSuite.","properties":{"fileInternalId":{"type":"string","description":"NetSuite internal ID of the SuiteScript file."},"function":{"type":"string","description":"Name of the function to execute."},"configuration":{"type":"object","description":"Configuration object passed to the hook function."}}}}},"mapping":{"type":"object","description":"Field mappings that define how source data fields map to NetSuite record fields.\nContains both body-level fields and sublist (line-item) fields.\n","properties":{"fields":{"type":"array","items":{"type":"object","properties":{"extract":{"type":"string","description":"Path in the source record to extract the value from.\nUse dot notation for nested fields (e.g., \"address.city\").\n"},"generate":{"type":"string","description":"The NetSuite field ID to write the value to (e.g., \"companyname\", \"email\", \"subsidiary\").\n"},"hardCodedValue":{"type":"string","description":"A static value to always use instead of extracting from source data.\nMutually exclusive with extract.\n"},"lookupName":{"type":"string","description":"Reference to a lookup defined in netsuite_da.lookups by name."},"dataType":{"type":"string","description":"Data type hint for the field value (e.g., \"string\", \"number\", \"date\", \"boolean\").\n"},"internalId":{"type":"boolean","description":"Whether the value is a NetSuite internal ID reference."},"immutable":{"type":"boolean","description":"When true, this field is only set on record creation, not on updates."},"discardIfEmpty":{"type":"boolean","description":"When true, skip this field mapping if the extracted value is empty."},"extractDateFormat":{"type":"string","description":"Date format of the extracted value (e.g., \"MM/DD/YYYY\", \"ISO8601\").\nUsed to parse date strings from source data.\n"},"extractDateTimezone":{"type":"string","description":"Timezone of the extracted date value (e.g., \"America/New_York\")."},"subRecordMapping":{"type":"object","description":"Nested mapping for NetSuite subrecord fields (e.g., address, inventory detail)."},"conditional":{"type":"object","description":"Conditional logic for when to apply this field mapping.\n","properties":{"lookupName":{"type":"string","description":"Lookup to evaluate for the condition."},"when":{"type":"string","enum":["record_created","record_updated","extract_not_empty","lookup_not_empty","lookup_empty","ignore_if_set"],"description":"When to apply this mapping:\n- record_created: only on new records\n- record_updated: only on existing records\n- extract_not_empty: only when extracted value is non-empty\n- lookup_not_empty: only when lookup returns a value\n- lookup_empty: only when lookup returns no value\n- ignore_if_set: skip if field already has a value\n"}}}}},"description":"Body-level field mappings. 
Each entry maps a source field to a NetSuite body field.\n"},"lists":{"type":"array","items":{"type":"object","properties":{"generate":{"type":"string","description":"The NetSuite sublist ID (e.g., \"item\" for sales order line items,\n\"addressbook\" for address sublists).\n"},"jsonPath":{"type":"string","description":"JSON path in the source data that contains the array of sublist records.\n"},"fields":{"type":"array","items":{"type":"object","properties":{"extract":{"type":"string","description":"Path in the sublist record to extract the value from."},"generate":{"type":"string","description":"NetSuite sublist field ID to write to."},"hardCodedValue":{"type":"string","description":"Static value for this sublist field."},"lookupName":{"type":"string","description":"Reference to a lookup by name."},"dataType":{"type":"string","description":"Data type hint for the field value."},"internalId":{"type":"boolean","description":"Whether the value is a NetSuite internal ID reference."},"isKey":{"type":"boolean","description":"Whether this field is a key field for matching existing sublist lines."},"immutable":{"type":"boolean","description":"Only set on record creation, not updates."},"discardIfEmpty":{"type":"boolean","description":"Skip if the extracted value is empty."},"extractDateFormat":{"type":"string","description":"Date format of the extracted value."},"extractDateTimezone":{"type":"string","description":"Timezone of the extracted date value."},"subRecordMapping":{"type":"object","description":"Nested mapping for subrecord fields within the sublist."},"conditional":{"type":"object","properties":{"lookupName":{"type":"string"},"when":{"type":"string","enum":["record_created","record_updated","extract_not_empty","lookup_not_empty","lookup_empty","ignore_if_set"]}}}}},"description":"Field mappings for each column in the sublist."}}},"description":"Sublist (line-item) mappings. Each entry maps source data to a NetSuite sublist.\n"}}},"lookups":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique name for this lookup, referenced by lookupName in field mappings.\n"},"recordType":{"type":"string","description":"NetSuite record type to search (e.g., \"customer\", \"item\").\n"},"searchField":{"type":"string","description":"NetSuite field to search against (e.g., \"email\", \"externalId\", \"name\").\n"},"resultField":{"type":"string","description":"Field from the lookup result to return (e.g., \"internalId\").\n"},"expression":{"type":"string","description":"NetSuite search expression for complex lookup conditions.\n"},"operator":{"type":"string","description":"Comparison operator (e.g., \"is\", \"contains\", \"startswith\").\n"},"includeInactive":{"type":"boolean","description":"Whether to include inactive records in lookup results."},"useDefaultOnMultipleMatches":{"type":"boolean","description":"When true, use the default value if multiple records match the lookup."},"allowFailures":{"type":"boolean","description":"When true, lookup failures do not stop the import."},"map":{"type":"object","description":"Static value mapping for lookup results (key-value pairs)."},"default":{"type":"string","description":"Default value when lookup returns no results."}}},"description":"Lookup definitions used to resolve reference values from NetSuite.\nReferenced by name from field mappings via the lookupName property.\n"},"rawOverride":{"type":"object","description":"Raw override object for advanced use cases. 
When useRawOverride is true,\nthis object is sent directly to the NetSuite API, bypassing normal mapping.\n"},"useRawOverride":{"type":"boolean","description":"When true, uses rawOverride instead of the normal mapping configuration.\n"},"isMigrated":{"type":"boolean","description":"System flag indicating whether this import was migrated from a legacy format.\n"},"preferences":{"type":"object","properties":{"ignoreReadOnlyFields":{"type":"boolean","description":"When true, silently skip read-only fields instead of raising errors."},"warningAsError":{"type":"boolean","description":"When true, treat NetSuite warnings as errors that stop the import."},"skipCustomMetadataRequests":{"type":"boolean","description":"When true, skip fetching custom field metadata to improve performance."}},"description":"Import behavior preferences that control how NetSuite handles the import operation.\n"},"retryUpdateAsAdd":{"type":"boolean","description":"When true, if an update fails because the record doesn't exist, automatically\nretry as an add operation. Useful for initial syncs where records may not exist yet.\n"},"isFileProvider":{"type":"boolean","description":"Whether this import handles files in the NetSuite File Cabinet.\n"},"file":{"type":"object","description":"File cabinet configuration for file-based imports into NetSuite.\n","properties":{"name":{"type":"string","description":"Filename for the file in NetSuite File Cabinet."},"fileType":{"type":"string","description":"NetSuite file type (e.g., \"PDF\", \"CSV\", \"PLAINTEXT\", \"EXCEL\", \"XML\").\n"},"folder":{"type":"string","description":"Folder path or name in the NetSuite File Cabinet."},"folderInternalId":{"type":"string","description":"Internal ID of the target folder in NetSuite File Cabinet."},"internalId":{"type":"string","description":"Internal ID of an existing file to update."},"backupFolderInternalId":{"type":"string","description":"Internal ID of a backup folder for file versioning."}}},"customFieldMetadata":{"type":"object","description":"Metadata about custom fields on the target record type.\nPopulated by the system from NetSuite metadata — do not set manually.\n"}}},"Rdbms":{"type":"object","description":"Configuration for RDBMS import operations. Used for database imports into SQL Server, MySQL, PostgreSQL, Snowflake, Oracle, MariaDB, and other relational databases.\n\n**Query type determines which fields are required**\n\n| queryType          | Required fields                | Do NOT set        |\n|--------------------|--------------------------------|-------------------|\n| [\"per_record\"]     | query (array of SQL strings)   | bulkInsert        |\n| [\"per_page\"]       | query (array of SQL strings)   | bulkInsert        |\n| [\"bulk_insert\"]    | bulkInsert object              | query             |\n| [\"bulk_load\"]      | bulkLoad object                | query             |\n\n**Critical:** query IS AN ARRAY OF STRINGS\nThe query field must be an array of plain strings, NOT a single string and NOT an array of objects.\nCorrect: [\"MERGE INTO target USING ...\"]\nWrong: \"MERGE INTO target ...\"\nWrong: [{\"query\": \"MERGE INTO target ...\"}]\n","properties":{"lookups":{"type":["string","null"],"description":"A collection of predefined reference data sets or key-value pairs designed to standardize, validate, and streamline input values across the application. 
These lookup entries serve as a centralized repository of commonly used values, enabling consistent data entry, minimizing errors, and facilitating efficient data retrieval and processing. They are essential for populating user interface elements such as dropdown menus, autocomplete fields, and for enforcing validation rules within business logic. Lookup data can be static or dynamically updated, and may support localization to accommodate diverse user bases. Additionally, lookups often include metadata such as descriptions, effective dates, and status indicators to provide context and support lifecycle management. They play a critical role in maintaining data integrity, enhancing user experience, and ensuring interoperability across different system modules and external integrations.\n\n**Field behavior**\n- Contains structured sets of reference data including codes, labels, enumerations, or mappings.\n- Drives UI components by providing selectable options and autocomplete suggestions.\n- Ensures data consistency by standardizing input values across different modules.\n- Supports both static and dynamic updates to reflect changes in business requirements.\n- May include metadata like descriptions, effective dates, or status indicators for enhanced context.\n- Facilitates localization and internationalization to support multiple languages and regions.\n- Enables validation logic by restricting inputs to predefined acceptable values.\n- Supports versioning to track changes and maintain historical data integrity.\n\n**Implementation guidance**\n- Organize lookup data as dictionaries, arrays, or database tables for efficient access and management.\n- Implement caching strategies to optimize performance and reduce redundant data retrieval.\n- Enforce uniqueness and relevance of lookup entries within their specific contexts.\n- Provide robust mechanisms for updating, versioning, and extending lookup data without disrupting system stability.\n- Incorporate localization and internationalization support for user-facing lookup values.\n- Secure sensitive lookup data with appropriate access controls and auditing.\n- Design APIs or interfaces for easy retrieval and management of lookup data.\n- Ensure synchronization of lookup data across distributed systems or microservices.\n\n**Examples**\n- Country codes and names: {\"US\": \"United States\", \"CA\": \"Canada\", \"MX\": \"Mexico\"}\n- Status codes representing entity states: {\"1\": \"Active\", \"2\": \"Inactive\", \"3\": \"Pending\"}\n- Product categories for e-commerce platforms: {\"ELEC\": \"Electronics\", \"FASH\": \"Fashion\", \"HOME\": \"Home & Garden\"}\n- Payment methods: {\"CC\": \"Credit Card\", \"PP\": \"PayPal\", \"BT\":"},"query":{"type":"array","items":{"type":"string"},"description":"Array of SQL query strings to execute. REQUIRED when queryType is [\"per_record\"], [\"per_page\"], or [\"first_page\"].\n\nEach element is a complete SQL statement as a plain string. Typically contains a single query.\n\nUse Handlebars to inject values from incoming records. 
RDBMS queries REQUIRE the `record.` prefix and triple braces `{{{ }}}`.\n\n**Format — array of strings**\nThis field is an ARRAY OF STRINGS, not a single string and not an array of objects.\n- CORRECT: [\"INSERT INTO users (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n- WRONG: \"INSERT INTO users ...\"  (not an array)\n- WRONG: [{\"query\": \"INSERT INTO users ...\"}]  (objects are invalid — causes Cast error)\n- WRONG: [\"INSERT INTO users (name) VALUES ('{{name}}')\"]  (missing record. prefix — will produce empty values)\n\n**Critical:** RDBMS HANDLEBARS SYNTAX\n- MUST use `record.` prefix: `{{{record.fieldName}}}`, NOT `{{{fieldName}}}`\n- MUST use triple braces `{{{ }}}` for value references — in RDBMS context, `{{record.field}}` outputs `'value'` (wrapped in single quotes), while `{{{record.field}}}` outputs `value` (raw). Triple braces give you control over quoting in your SQL.\n- Block helpers (`{{#each}}`, `{{#if}}`, `{{/each}}`) use double braces as normal.\n- Nested fields: `{{{record.properties.email}}}`\n\n**Examples**\n- INSERT: [\"INSERT INTO customers (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n- UPDATE: [\"UPDATE inventory SET qty = {{{record.quantity}}} WHERE sku = '{{{record.sku}}}'\"]\n- UPSERT (MySQL): [\"INSERT INTO users (id, name) VALUES ({{{record.id}}}, '{{{record.name}}}') ON DUPLICATE KEY UPDATE name = '{{{record.name}}}'\"]\n- MERGE (Snowflake): [\"MERGE INTO target USING (SELECT '{{{record.email}}}' AS email) AS src ON target.email = src.email WHEN MATCHED THEN UPDATE SET name = '{{{record.name}}}' WHEN NOT MATCHED THEN INSERT (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n\n**String vs numeric values in handlebars**\n- Strings: wrap in single quotes — '{{{record.name}}}'\n- Numbers: no quotes — {{{record.quantity}}}\n"},"queryType":{"type":["array","null"],"items":{"type":"string","enum":["INSERT","UPDATE","bulk_insert","per_record","per_page","first_page","bulk_load"]},"description":"Specifies the execution strategy for the SQL operation.\n\n**CRITICAL: This field DETERMINES whether `query` or `bulkInsert` is required.**\n\n**Preference order (use the highest-performance option the operation supports)**\n\n1. **`[\"bulk_load\"]`** — fastest. Stages data as a file then loads via COPY/bulk mechanism. Supports upsert via `primaryKeys`, and custom merge/ignore logic via `overrideMergeQuery`. **Currently Snowflake and NSAW.**\n2. **`[\"bulk_insert\"]`** — batch INSERT via multi-row VALUES. No upsert/ignore logic. All RDBMS types.\n3. **`[\"per_page\"]`** — one SQL statement per page of records. All RDBMS types.\n4. **`[\"per_record\"]`** — one SQL statement per record. Full control over individual record SQL. All RDBMS types.\n\nAlways prefer the highest option that satisfies the operation. `bulk_load` handles INSERT, UPSERT, and custom merge logic. `bulk_insert` is best for pure INSERTs when `bulk_load` isn't available. `per_page` handles custom SQL at batch level. `per_record` gives full control over individual record SQL.\n\n**Decision tree (Follow in order)**\n\n**STEP 1: Does the database support bulk_load?**\n- If Snowflake, NSAW, or another database with bulk_load support:\n  → **USE `[\"bulk_load\"]`** with:\n    - `bulkLoad.tableName` (required)\n    - `bulkLoad.primaryKeys` for upsert/merge (auto-generates MERGE)\n    - `bulkLoad.overrideMergeQuery: true` for custom SQL (ignore-existing, conditional updates, multi-table ops). 
Override SQL references `{{import.rdbms.bulkLoad.preMergeTemporaryTable}}` for the staging table.\n    - No `primaryKeys` for pure INSERT (auto-generates INSERT)\n\n**STEP 2: Database does NOT support bulk_load**\n- **Pure INSERT**: → **USE `[\"bulk_insert\"]`** (requires `bulkInsert` object)\n- **UPDATE / UPSERT / custom matching logic**: → **USE `[\"per_page\"]`** (requires `query` field). Only use `[\"per_record\"]` if the logic truly cannot be expressed as a batch operation.\n\n**Critical relationship to other fields**\n\n| queryType        | REQUIRES         | DO NOT SET       |\n|------------------|------------------|------------------|\n| `[\"per_record\"]` | `query` field    | `bulkInsert`     |\n| `[\"bulk_insert\"]`| `bulkInsert` obj | `query`          |\n\n**Do not use**\n- `[\"INSERT\"]` or `[\"UPDATE\"]` as standalone values\n- Multiple types combined\n\n**Examples**\n\n**Insert with ignoreExisting (MUST use per_record)**\nPrompt: \"Import customers, ignore existing based on email\"\n```json\n\"queryType\": [\"per_record\"],\n\"query\": [\"INSERT INTO customers (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n```\n\n**Pure insert (no duplicate checking)**\nPrompt: \"Insert all records into users table\"\n```json\n\"queryType\": [\"bulk_insert\"],\n\"bulkInsert\": { ... }\n```\n\n**Update Operation**\nPrompt: \"Update inventory counts\"\n```json\n\"queryType\": [\"per_record\"],\n\"query\": [\"UPDATE inventory SET qty = {{{record.qty}}} WHERE sku = '{{{record.sku}}}'\"]\n```\n\n**Run Once Per Page**\nPrompt: \"Execute this stored procedure for each page of results\"\n```json\n\"queryType\": [\"per_page\"]\n```\n**per_page uses a different Handlebars context than per_record.** The context is the entire batch (`batch_of_records`), not a single `record.` object. Loop through with `{{#each batch_of_records}}...{{{record.fieldName}}}...{{/each}}`.\n\n**Run Once at Start**\nPrompt: \"Run this cleanup script before processing\"\n```json\n\"queryType\": [\"first_page\"]\n```"},"bulkInsert":{"type":"object","description":"**CRITICAL: This object is ONLY used when `queryType` is `[\"bulk_insert\"]`.**\n\n✅ **SET THIS** when:\n- `queryType` is `[\"bulk_insert\"]`\n- Pure INSERT operation with NO ignoreExisting/duplicate checking\n\n❌ **DO NOT SET THIS** when:\n- `queryType` is `[\"per_record\"]` (use `query` field instead)\n- Operation involves UPDATE, UPSERT, or ignoreExisting logic\n\nIf the prompt mentions \"ignore existing\", \"skip duplicates\", or \"match on field\",\nuse `queryType: [\"per_record\"]` with the `query` field instead of this object.","properties":{"tableName":{"type":"string","description":"The name of the database table into which the bulk insert operation will be executed. This value must correspond to a valid, existing table within the target relational database management system (RDBMS). It serves as the primary destination for inserting multiple rows of data efficiently in a single operation. The table name can include schema or namespace qualifiers if supported by the database (e.g., \"schemaName.tableName\"), allowing precise targeting within complex database structures. 
Proper validation and sanitization of this value are essential to ensure the operation's success and to prevent SQL injection or other security vulnerabilities.\n\n**Field behavior**\n- Specifies the exact destination table for the bulk insert operation.\n- Must correspond to an existing table in the database schema.\n- Case sensitivity and naming conventions depend on the underlying RDBMS.\n- Supports schema-qualified names where applicable.\n- Used directly in the SQL `INSERT INTO` statement.\n- Influences how data is mapped and inserted during the bulk operation.\n\n**Implementation guidance**\n- Verify the existence and accessibility of the table in the target database before execution.\n- Ensure compliance with the RDBMS naming rules, including reserved keywords, allowed characters, and maximum length.\n- Support and correctly handle schema-qualified table names, respecting database-specific syntax.\n- Sanitize and validate input rigorously to prevent SQL injection and other security risks.\n- Apply appropriate quoting or escaping mechanisms based on the RDBMS (e.g., backticks for MySQL, double quotes for PostgreSQL).\n- Consider the impact of case sensitivity, especially when dealing with quoted identifiers.\n\n**Examples**\n- \"users\"\n- \"public.orders\"\n- \"inventory.products\"\n- \"sales_data_2024\"\n- \"dbo.CustomerRecords\"\n\n**Important notes**\n- The target table must exist and have the appropriate schema and permissions to accept bulk inserts.\n- Incorrect, misspelled, or non-existent table names will cause the operation to fail.\n- Case sensitivity varies by database; for example, PostgreSQL treats quoted identifiers as case-sensitive.\n- Avoid using unvalidated dynamic input to mitigate security vulnerabilities.\n- Schema qualifiers should be used consistently to avoid ambiguity in multi-schema environments.\n\n**Dependency chain**\n- Relies on the database connection and authentication configuration.\n- Interacts with other bulkInsert parameters such as column mappings and data payload.\n- May be influenced by transaction management, locking mechanisms, and database constraints during the insert.\n- Dependent on the database's metadata for validation and existence checks.\n\n**Techn**"},"batchSize":{"type":"string","description":"The number of records to be inserted into the database in a single batch during a bulk insert operation. This parameter is crucial for optimizing the performance and efficiency of bulk data loading by controlling how many records are grouped together before being sent to the database. Proper tuning of batchSize balances memory consumption, transaction overhead, and throughput, enabling the system to handle large volumes of data efficiently without overwhelming resources or causing timeouts. 
Adjusting batchSize directly impacts transaction size, network utilization, error handling granularity, and recovery strategies, making it essential to tailor this value based on the specific database capabilities, system resources, and workload characteristics.\n\n**Field behavior**\n- Specifies the exact count of records processed and committed in a single batch during bulk insert operations.\n- Determines the frequency and size of database transactions, influencing overall throughput, latency, and system responsiveness.\n- Larger batch sizes can improve throughput but may increase memory usage, transaction duration, and risk of timeouts or locks.\n- Smaller batch sizes reduce memory footprint and transaction time but may increase the total number of transactions and associated overhead.\n- Defines the scope of error handling, as failures typically affect only the current batch, allowing for partial retries or rollbacks.\n- Controls the granularity of commit points, impacting rollback, recovery, and consistency strategies in case of failures.\n\n**Implementation guidance**\n- Choose a batch size that respects the database’s transaction limits, available system memory, and network conditions.\n- Conduct benchmarking and load testing with different batch sizes to identify the optimal balance for your environment.\n- Monitor system performance continuously and adjust batchSize dynamically if supported, to adapt to varying workloads.\n- Implement comprehensive error handling to manage partial batch failures, including retry logic or compensating transactions.\n- Verify compatibility with database drivers, ORM frameworks, or middleware, which may impose constraints or optimizations on batch sizes.\n- Consider the size, complexity, and serialization overhead of individual records, as larger or more complex records may necessitate smaller batches.\n- Factor in network latency and bandwidth to optimize data transfer efficiency and reduce potential bottlenecks.\n\n**Examples**\n- 1000: Suitable for moderate bulk insert operations, balancing speed and resource consumption effectively.\n- 50000: Ideal for high-throughput environments with ample memory and finely tuned database configurations.\n- 100: Appropriate for systems with limited memory or where minimizing transaction size and duration is critical.\n- 5000: A common default batch size providing a good compromise between performance and resource usage.\n\n**Important**"}}},"bulkLoad":{"type":"object","properties":{"tableName":{"type":"string","description":"The name of the target table within the relational database management system (RDBMS) where the bulk load operation will insert, update, or merge data. This property specifies the exact destination table for the bulk data operation and must correspond to a valid, existing table in the database schema. It can include schema qualifiers if supported by the database (e.g., schema.tableName), and must adhere to the naming conventions and case sensitivity rules of the target RDBMS. Proper specification of this property is critical to ensure data is loaded into the correct location without errors or unintended data modification. 
Accurate definition of the table name helps maintain data integrity and prevents operational failures during bulk load processes.\n\n**Field behavior**\n- Identifies the specific table in the database to receive the bulk-loaded data.\n- Must reference an existing table accessible with the current database credentials.\n- Supports schema-qualified names where applicable (e.g., schema.tableName).\n- Used as the target for insert, update, or merge operations during bulk load.\n- Case sensitivity and naming conventions depend on the underlying database system.\n- Determines the scope and context of the bulk load operation within the database.\n- Influences transactional behavior, locking, and concurrency controls during the operation.\n\n**Implementation guidance**\n- Verify the target table exists and the executing user has sufficient permissions for data modification.\n- Ensure the table name respects the case sensitivity and naming rules of the RDBMS.\n- Avoid reserved keywords or special characters unless properly escaped or quoted according to database requirements.\n- Support fully qualified names including schema or database prefixes if required to avoid ambiguity.\n- Validate compatibility between the table schema and the incoming data to prevent type mismatches or constraint violations.\n- Consider transactional behavior, locking, and concurrency implications on the target table during bulk load.\n- Test the bulk load operation in a development or staging environment to confirm correct table targeting.\n- Implement error handling to catch and report issues related to invalid or inaccessible table names.\n\n**Examples**\n- \"customers\"\n- \"sales_data_2024\"\n- \"public.orders\"\n- \"inventory.products\"\n- \"dbo.EmployeeRecords\"\n\n**Important notes**\n- Incorrect or misspelled table names will cause the bulk load operation to fail.\n- The target table schema must be compatible with the incoming data structure to avoid errors.\n- Bulk loading may overwrite, append, or merge data depending on the load mode; ensure this aligns with business requirements.\n- Some databases require quoting or escaping of table names, especially if they"},"primaryKeys":{"type":["string","null"],"description":"primaryKeys specifies the list of column names that uniquely identify each record in the target relational database table during a bulk load operation. These keys are essential for maintaining data integrity by enforcing uniqueness constraints and enabling precise identification of rows for operations such as inserts, updates, upserts, or conflict resolution. Properly defining primaryKeys ensures that duplicate records are detected and handled appropriately, preventing data inconsistencies and supporting efficient data merging processes. This property supports both single-column and composite primary keys, requiring the exact column names as defined in the target schema, and plays a critical role in guiding the bulk load mechanism to correctly match and manipulate records based on their unique identifiers.  \n**Field behavior:**  \n- Defines one or more columns that collectively serve as the unique identifier for each row in the table.  \n- Enforces uniqueness constraints during bulk load operations to prevent duplicate entries.  \n- Facilitates detection and resolution of conflicts or duplicates during data insertion or updating.  \n- Influences the behavior of upsert, merge, or conflict resolution mechanisms in the bulk load process.  
\n- Ensures that each record can be reliably matched and updated based on the specified keys.  \n- Supports both single-column and composite keys, maintaining the order of columns as per the database schema.  \n**Implementation guidance:**  \n- Include all columns that form the complete primary key, especially for composite keys, maintaining the correct order as defined in the database schema.  \n- Ensure column names exactly match those in the target database, respecting case sensitivity where applicable.  \n- Validate that all specified primary key columns exist in both the source data and the target table schema before initiating the bulk load.  \n- Avoid leaving the primaryKeys list empty when uniqueness enforcement or conflict resolution is required.  \n- Consider the immutability and stability of primary key columns to prevent inconsistencies during repeated load operations.  \n- Confirm that the primary key columns are indexed or constrained appropriately in the target database to optimize performance.  \n**Examples:**  \n- [\"id\"] — single-column primary key.  \n- [\"order_id\", \"product_id\"] — composite primary key consisting of two columns.  \n- [\"user_id\", \"timestamp\"] — composite key combining user identifier and timestamp for uniqueness.  \n- [\"customer_id\", \"account_number\", \"region\"] — multi-column composite primary key.  \n**Important notes:**  \n- Omitting primaryKeys when required can result in data duplication, failed loads, or incorrect conflict handling.  \n- The order of"},"overrideMergeQuery":{"type":"boolean","description":"Boolean flag controlling whether the system-generated merge statement for a bulk load is replaced by custom merge SQL. 
When true, the auto-generated statement is not used; the override SQL supplied for the operation runs against the staged data instead, and it references the staging table via `{{import.rdbms.bulkLoad.preMergeTemporaryTable}}`. Use this for logic the auto-generated MERGE cannot express, such as ignore-existing behavior, conditional updates, or multi-table operations. When false or omitted, the system builds the statement automatically: a MERGE keyed on `primaryKeys` when primary keys are provided, or a plain INSERT when they are not.\n\n**Field behavior**\n- `true`: the default merge statement is skipped and the supplied override SQL is executed within the bulk load operation.\n- `false` or omitted: the system generates and executes the default MERGE/INSERT from the table schema, `primaryKeys`, and data mappings.\n- Only meaningful when `queryType` is `[\"bulk_load\"]`.\n\n**Implementation guidance**\n- Ensure the override SQL covers every required merge scenario (insert, update, and optional delete) so records are never left in a partial state.\n- Reference the staging table with `{{import.rdbms.bulkLoad.preMergeTemporaryTable}}` rather than hard-coding its name.\n- Validate the SQL against the target RDBMS and test it in a staging environment before production use."}},"description":"Configuration object for bulk load operations. REQUIRED when `queryType` is `[\"bulk_load\"]`; do NOT set it for any other query type. 
Bulk load stages the incoming data as a file and loads it via the database's COPY/bulk mechanism, making it the fastest option for large volumes; it is currently supported for Snowflake and NSAW.\n\n**Field behavior**\n- `tableName` (required): the target table for the load.\n- `primaryKeys`: when provided, the system auto-generates a MERGE keyed on these columns (upsert behavior); when omitted, a plain INSERT is generated.\n- `overrideMergeQuery`: set to true to replace the auto-generated merge with custom SQL for ignore-existing, conditional-update, or multi-table logic.\n\n**Example**\n- \"bulkLoad\": { \"tableName\": \"customers\" } with `queryType: [\"bulk_load\"]` and no `primaryKeys` performs a straight bulk INSERT into the customers table."},"updateLookupName":{"type":"string","description":"Specifies the exact name of the lookup table or entity within the relational database management system (RDBMS) that is targeted for update operations. This property serves as a precise identifier to determine which lookup data set should be modified, ensuring accurate and efficient targeting within the database schema. It typically corresponds to a physical table name or a logical entity name defined in the database and must strictly align with the existing schema to enable successful updates without errors or unintended side effects. Proper use of this property is critical for maintaining data integrity during update operations on lookup data, as it directly influences which records are affected. 
Accurate specification helps prevent accidental data corruption and supports clear, maintainable database interactions.\n\n**Field behavior**\n- Defines the specific lookup table or entity to be updated during an operation.\n- Acts as a key identifier for the update process to locate the correct data set.\n- Must be provided when performing update operations on lookup data.\n- Influences the scope and effect of the update by specifying the target entity.\n- Changes to this value directly affect which data is modified in the database.\n- Invalid or missing values will cause update operations to fail or target unintended data.\n\n**Implementation guidance**\n- Ensure the value exactly matches the existing lookup table or entity name in the database schema.\n- Validate against the RDBMS naming conventions, including allowed characters, length restrictions, and case sensitivity.\n- Maintain consistent naming conventions across the application to prevent ambiguity and errors.\n- Account for case sensitivity based on the underlying database system’s configuration and collation settings.\n- Avoid using reserved keywords, special characters, or whitespace that may cause conflicts or syntax errors.\n- Implement validation checks to catch misspellings or invalid names before executing update operations.\n- Incorporate error handling to manage cases where the specified lookup name does not exist or is inaccessible.\n\n**Examples**\n- \"country_codes\"\n- \"user_roles\"\n- \"product_categories\"\n- \"status_lookup\"\n- \"department_list\"\n\n**Important notes**\n- Providing an incorrect or misspelled name will result in failed update operations or runtime errors.\n- This field must not be left empty when an update operation is intended.\n- It is only relevant and required when performing update operations on lookup tables or entities.\n- Changes to this value should be carefully managed to avoid unintended data modifications or corruption.\n- Ensure appropriate permissions exist for updating the specified lookup entity to prevent authorization failures.\n- Consistent use of this property across environments (development, staging, production) is essential"},"updateExtract":{"type":"string","description":"Specifies the comprehensive configuration and parameters for updating an existing data extract within a relational database management system (RDBMS). This property defines how the extract's data is modified, supporting various update strategies such as incremental updates, full refreshes, conditional changes, or merges. It ensures that the extract remains accurate, consistent, and aligned with business rules and data integrity requirements by controlling the scope, method, and transactional behavior of update operations. 
Additionally, it allows for fine-grained control through filters, timestamps, and criteria to selectively update portions of the extract, while managing concurrency and maintaining data consistency throughout the process.\n\n**Field behavior**\n- Determines the update strategy applied to an existing data extract, including incremental, full refresh, conditional, or merge operations.\n- Controls how new data interacts with existing extract data—whether by overwriting, appending, or merging.\n- Supports selective updates using filters, timestamps, or conditional criteria to target specific subsets of data.\n- Manages transactional integrity to ensure updates are atomic, consistent, isolated, and durable (ACID-compliant).\n- Coordinates update execution to prevent conflicts, data corruption, or partial updates.\n- Enables configuration of error handling, logging, and rollback mechanisms during update processes.\n- Handles concurrency control to avoid race conditions and ensure data consistency in multi-user environments.\n- Allows scheduling and triggering of update operations based on time, events, or external signals.\n\n**Implementation guidance**\n- Ensure update configurations comply with organizational data governance, security, and integrity policies.\n- Validate all input parameters and update conditions to avoid inconsistent or partial data modifications.\n- Implement robust transactional support to allow rollback on failure and maintain data consistency.\n- Incorporate detailed logging and error reporting to facilitate monitoring, auditing, and troubleshooting.\n- Optimize update methods based on data volume, change frequency, and performance requirements, balancing between incremental and full refresh approaches.\n- Tailor update logic to leverage specific capabilities and constraints of the target RDBMS, including locking and concurrency controls.\n- Coordinate update timing and execution with downstream systems and data consumers to minimize disruption.\n- Design update processes to be idempotent where possible to support safe retries and recovery.\n- Consider the impact of update latency on data freshness and downstream analytics.\n\n**Examples**\n- Configuring an incremental update that uses a last-modified timestamp column to append only new or changed records to the extract.\n- Defining a full refresh update that completely replaces the existing extract data with a newly extracted dataset.\n- Setting up"},"ignoreLookupName":{"type":"string","description":"Specifies whether the lookup name should be ignored during relational database management system (RDBMS) operations, such as query generation, data retrieval, or relationship resolution. When enabled, this flag instructs the system to bypass the use of the lookup name, which can alter how relationships, joins, or references are resolved within database queries. This behavior is particularly useful in scenarios where the lookup name is redundant, introduces unnecessary complexity, or causes performance overhead. 
Additionally, it allows for alternative identification methods to be prioritized over the lookup name, enabling more flexible or optimized query strategies.\n\n**Field behavior**\n- When set to true, the system excludes the lookup name from all relevant database operations, including query construction, join conditions, and filtering criteria.\n- When false or omitted, the lookup name is actively utilized to resolve references, enforce relationships, and optimize data retrieval.\n- Determines whether explicit lookup names override or supplement default naming conventions, schema-based identifiers, or inferred relationships.\n- Affects how related data is fetched, potentially influencing join strategies, query plans, and lookup optimizations.\n- Impacts the generation of SQL or other query languages by controlling the inclusion of lookup name references.\n\n**Implementation guidance**\n- Enable this flag to enhance query performance by skipping unnecessary lookup name resolution when it is known to be non-essential or redundant.\n- Thoroughly evaluate the impact on data integrity and correctness to ensure that ignoring the lookup name does not result in incomplete, inaccurate, or inconsistent query results.\n- Validate all downstream processes, components, and integrations that depend on lookup names to prevent breaking dependencies or causing data inconsistencies.\n- Consider the underlying database schema design, naming conventions, and relationship mappings before enabling this flag to avoid unintended side effects.\n- Integrate this flag within query builders, ORM layers, data access modules, or middleware to conditionally include or exclude lookup names during query generation and execution.\n- Implement comprehensive testing and monitoring to detect any adverse effects on application behavior or data retrieval accuracy when this flag is toggled.\n\n**Examples**\n- `ignoreLookupName: true` — The system bypasses the lookup name, generating queries without referencing it, which may simplify query logic and improve execution speed.\n- `ignoreLookupName: false` — The lookup name is included in query logic, ensuring that relationships and references are resolved using the defined lookup identifiers.\n- Omission of the property defaults to `false`, meaning lookup names are considered and used unless explicitly ignored.\n\n**Important notes**\n- Ign"},"ignoreExtract":{"type":"string","description":"Specifies whether the extraction process from the relational database management system (RDBMS) should be entirely skipped. When set to true, the system bypasses the data extraction step, which is particularly useful in scenarios where data extraction is managed externally, has already been completed, or is unnecessary for the current operation. This flag directly controls whether the extraction engine initiates the retrieval of data from the source database, thereby influencing all subsequent stages that depend on the extracted data, such as transformation, validation, and loading. 
Proper use of this flag ensures flexibility in workflows by allowing integration with external data pipelines or pre-extracted datasets without redundant extraction efforts.\n\n**Field behavior**\n- Determines if the data extraction phase from the RDBMS is executed or omitted.\n- When true, no data is pulled from the source database, effectively skipping extraction.\n- When false or omitted, the extraction process runs normally, retrieving data as configured.\n- Influences downstream processes such as data transformation, validation, and loading that depend on extracted data.\n- Must be explicitly set to true to skip extraction; otherwise, extraction proceeds by default.\n- Impacts the overall data pipeline flow by potentially altering the availability of fresh data.\n\n**Implementation guidance**\n- Default value should be false to ensure extraction occurs unless intentionally overridden.\n- Validate input strictly as a boolean to prevent misconfiguration.\n- Ensure that skipping extraction does not cause failures or data inconsistencies in subsequent pipeline stages.\n- Use this flag in workflows where extraction is handled outside the current system or when working with pre-extracted datasets.\n- Incorporate checks or safeguards to confirm that necessary data is available from alternative sources when extraction is skipped.\n- Log or notify when extraction is skipped to maintain transparency in data processing workflows.\n- Coordinate with other system components to handle scenarios where extraction is bypassed, ensuring smooth pipeline execution.\n\n**Examples**\n- `ignoreExtract: true` — Extraction step is completely bypassed.\n- `ignoreExtract: false` — Extraction step is performed as usual.\n- Field omitted — Defaults to false, so extraction occurs normally.\n- Used in a pipeline where data is pre-loaded from a file or external system, setting `ignoreExtract: true` to avoid redundant extraction.\n\n**Important notes**\n- Setting this flag to true assumes that the system has access to the required data through other means; otherwise, downstream processes may fail or produce incomplete results.\n- Incorrect use can lead to missing data, causing errors or inconsistencies in the data pipeline."}}},"S3":{"type":"object","description":"Configuration for S3 exports","properties":{"region":{"type":"string","description":"The AWS region where the S3 bucket is physically located, representing the specific geographical area within AWS's global infrastructure that hosts the bucket. This designation directly affects data access latency, availability, redundancy, and compliance with regional data governance, privacy laws, and residency requirements. Selecting the appropriate region is critical for optimizing performance, minimizing costs related to data transfer and storage, and ensuring adherence to legal and organizational policies. The region must be specified using a valid AWS region identifier that accurately corresponds to the bucket's actual location to avoid connectivity issues, authentication failures, and improper routing of API requests. 
Proper region selection also influences disaster recovery strategies, service availability, and integration with other AWS services, making it a foundational parameter in S3 bucket configuration and management.\n\n**Field behavior**\n- Defines the physical and logical location of the S3 bucket within AWS's global network.\n- Directly impacts data access speed, latency, and throughput.\n- Determines compliance with regional data sovereignty, privacy, and security regulations.\n- Must be a valid AWS region code that matches the bucket’s deployment location.\n- Influences availability zones, fault tolerance, and disaster recovery planning.\n- Affects endpoint URL construction and API request routing.\n- Plays a role in cost optimization by affecting storage pricing and data transfer fees.\n- Governs integration capabilities with other AWS services available in the selected region.\n\n**Implementation guidance**\n- Use official AWS region codes such as \"us-east-1\", \"eu-west-2\", or \"ap-southeast-2\" as defined by AWS.\n- Validate the region value against the current list of supported AWS regions to prevent misconfiguration.\n- Ensure the specified region exactly matches the bucket’s physical location to avoid access and authentication errors.\n- Consider cost implications including storage pricing, data transfer fees, and cross-region replication expenses.\n- When migrating or replicating buckets across regions, update this property accordingly and plan for data transfer and downtime.\n- Verify service availability and feature support in the chosen region before deployment.\n- Regularly review AWS region updates and deprecations to maintain compatibility.\n- Use region-specific endpoints when constructing API requests to ensure proper routing.\n\n**Examples**\n- us-east-1 (Northern Virginia, USA)\n- eu-central-1 (Frankfurt, Germany)\n- ap-southeast-2 (Sydney, Australia)\n- sa-east-1 (São Paulo, Brazil)\n- ca-central-1 (Canada Central)\n\n**Important notes**\n- Incorrect or mismatched region specification"},"bucket":{"type":"string","description":"The name of the Amazon S3 bucket where data or objects are stored or will be stored. This bucket serves as the primary container within the AWS S3 service, uniquely identifying the storage location for your data across all AWS accounts globally. The bucket name must strictly comply with AWS naming conventions, including being DNS-compliant, entirely lowercase, and free of underscores, uppercase letters, or IP address-like formats, ensuring global uniqueness and compatibility. It is essential that the specified bucket already exists and that the user or service has the necessary permissions to access, upload, or modify its contents. 
Selecting the appropriate bucket, particularly with regard to its AWS region, can significantly impact application performance, cost efficiency, and adherence to data residency and regulatory requirements.\n\n**Field behavior**\n- Specifies the target S3 bucket for all data storage and retrieval operations.\n- Must reference an existing bucket with proper access permissions.\n- Acts as a globally unique identifier within the AWS S3 ecosystem.\n- Immutable after initial assignment; changing the bucket requires specifying a new bucket name.\n- Influences data locality, latency, cost, and compliance based on the bucket’s region.\n\n**Implementation guidance**\n- Validate bucket names against AWS S3 naming rules: lowercase letters, numbers, and hyphens only; no underscores, uppercase letters, or IP address formats.\n- Confirm the bucket’s existence and verify access permissions before performing operations.\n- Consider the bucket’s AWS region to optimize latency, cost, and compliance with data governance policies.\n- Implement comprehensive error handling for scenarios such as bucket not found, access denied, or insufficient permissions.\n- Align bucket selection with organizational policies and regulatory requirements concerning data storage and residency.\n\n**Examples**\n- \"my-app-data-bucket\"\n- \"user-uploads-2024\"\n- \"company-backups-eu-west-1\"\n\n**Important notes**\n- Bucket names are globally unique across all AWS accounts and cannot be reused or renamed once created.\n- Access control is managed through AWS IAM roles, policies, and bucket policies, which must be correctly configured.\n- The bucket’s region selection is critical for optimizing performance, minimizing costs, and ensuring legal compliance.\n- Bucket naming restrictions prohibit uppercase letters, underscores, and IP address-like formats to maintain DNS compliance.\n\n**Dependency chain**\n- Depends on the availability and stability of the AWS S3 service.\n- Requires valid AWS credentials with appropriate permissions to access or modify the bucket.\n- Typically used alongside object keys, region configurations,"},"fileKey":{"type":"string","description":"The unique identifier or path string that specifies the exact location of a file stored within an Amazon S3 bucket. This key acts as the primary reference for locating, accessing, managing, and manipulating a specific file within the storage system. It can be a simple filename or a hierarchical path that mimics folder structures to logically organize files within the bucket. The fileKey is case-sensitive and must be unique within the bucket to prevent conflicts or overwriting. It supports UTF-8 encoded characters and can include delimiters such as slashes (\"/\") to simulate directories, enabling structured file organization. Proper encoding and adherence to AWS naming conventions are essential to ensure seamless integration with S3 APIs and web requests. 
Additionally, the fileKey plays a critical role in access control, lifecycle policies, and event notifications within the S3 environment.\n\n**Field behavior**\n- Uniquely identifies a file within the S3 bucket namespace.\n- Serves as the primary reference in all file-related operations such as retrieval, update, deletion, and metadata management.\n- Case-sensitive and must be unique to avoid overwriting or conflicts.\n- Supports folder-like delimiters (e.g., slashes) to simulate directory structures for logical organization.\n- Used as a key parameter in all relevant S3 API operations, including GetObject, PutObject, DeleteObject, and CopyObject.\n- Influences access permissions and lifecycle policies when used with prefixes or specific key patterns.\n\n**Implementation guidance**\n- Use a UTF-8 encoded string that accurately reflects the file’s intended path or name.\n- Avoid unsupported or problematic characters; strictly follow S3 key naming conventions to prevent errors.\n- Incorporate logical folder structures using delimiters for better organization and maintainability.\n- Ensure URL encoding when the key is included in web requests or URLs to handle special characters properly.\n- Validate that the key length does not exceed S3’s maximum allowed size (up to 1024 bytes).\n- Consider the impact of key naming on access control policies, lifecycle rules, and event triggers.\n\n**Examples**\n- \"documents/report2024.pdf\"\n- \"images/profile/user123.png\"\n- \"backups/2023/12/backup.zip\"\n- \"videos/2024/events/conference.mp4\"\n- \"logs/2024/06/15/server.log\"\n\n**Important notes**\n- The fileKey is case-sensitive; for example, \"File.txt\" and \"file.txt\" are distinct keys.\n- Changing the fileKey"},"backupBucket":{"type":"string","description":"The name of the Amazon S3 bucket designated for securely storing backup data. This bucket serves as the primary repository for saving copies of critical information, ensuring data durability, integrity, and availability for recovery in the event of data loss, corruption, or disaster. It must be a valid, existing bucket within the AWS environment, configured with appropriate permissions and security settings to facilitate reliable and efficient backup operations. Proper configuration of this bucket—including enabling versioning to preserve historical backup states, applying encryption to protect data at rest, and setting lifecycle policies to manage storage costs and data retention—is essential to optimize backup management, maintain compliance with organizational and regulatory requirements, and control expenses. The bucket should ideally reside in the same AWS region as the backup source to minimize latency and data transfer costs, and may incorporate cross-region replication for enhanced disaster recovery capabilities. 
Additionally, the bucket should be monitored regularly for access patterns, storage usage, and security compliance to ensure ongoing protection and operational efficiency.\n\n**Field behavior**\n- Specifies the exact S3 bucket where backup data will be written and stored.\n- Must reference a valid, existing bucket within the AWS account and region.\n- Used exclusively during backup processes to save data snapshots or incremental backups.\n- Requires appropriate write permissions to allow backup services to upload data.\n- Typically remains static unless backup storage strategies or requirements change.\n- Supports integration with backup scheduling and monitoring systems.\n- Facilitates data recovery by maintaining backup copies in a secure, durable location.\n\n**Implementation guidance**\n- Verify that the bucket name adheres to AWS S3 naming conventions and is DNS-compliant.\n- Confirm the bucket exists and is accessible by the backup service or user.\n- Set up IAM roles and bucket policies to grant necessary write and read permissions securely.\n- Enable bucket versioning to maintain historical backup versions and support data recovery.\n- Implement lifecycle policies to manage storage costs by archiving or deleting old backups.\n- Apply encryption (e.g., AWS KMS) to protect backup data at rest and meet compliance requirements.\n- Consider enabling cross-region replication for disaster recovery scenarios.\n- Regularly audit bucket permissions and access logs to ensure security compliance.\n- Monitor storage usage and costs to optimize resource allocation and prevent unexpected charges.\n- Align bucket configuration with organizational policies for data retention, security, and compliance.\n\n**Examples**\n- \"my-app-backups\"\n- \"company-data-backup-2024\"\n- \"prod-environment-backup-bucket\"\n- \"s3-backups-region1"},"serverSideEncryptionType":{"type":"string","description":"serverSideEncryptionType specifies the type of server-side encryption applied to objects stored in the S3 bucket, determining how data at rest is securely protected by the storage service. This property defines the encryption algorithm or method used by the server to automatically encrypt data upon storage, ensuring confidentiality, integrity, and compliance with security standards. It accepts predefined values that correspond to supported encryption mechanisms, influencing how data is encrypted, accessed, and managed during storage and retrieval operations. 
By configuring this property, users can enforce encryption policies that safeguard sensitive information without requiring client-side encryption management, thereby simplifying security administration and enhancing data protection.\n\n**Field behavior**\n- Defines the encryption algorithm or method used by the server to encrypt data at rest.\n- Controls whether server-side encryption is enabled or disabled for stored objects.\n- Accepts specific predefined values corresponding to supported encryption types such as AES256 or aws:kms.\n- Influences data handling during storage and retrieval to maintain data confidentiality and integrity.\n- Applies encryption automatically without requiring client-side encryption management.\n- Determines compliance with organizational and regulatory encryption requirements.\n- Affects how encryption keys are managed, accessed, and audited.\n- Impacts access control and permissions related to encrypted objects.\n- May affect performance and cost depending on the encryption method chosen.\n\n**Implementation guidance**\n- Validate the input value against supported encryption types, primarily \"AES256\" and \"aws:kms\".\n- Ensure the selected encryption type is compatible with the S3 bucket’s configuration, policies, and permissions.\n- When using \"aws:kms\", specify the appropriate AWS KMS key ID if required, and verify key permissions.\n- Gracefully handle scenarios where encryption is not specified, disabled, or set to an empty value.\n- Provide clear and actionable error messages if an unsupported or invalid encryption type is supplied.\n- Consider the impact on performance and cost when enabling encryption, especially with KMS-managed keys.\n- Document encryption settings clearly for users to understand the security posture of stored data.\n- Implement fallback or default behaviors when encryption settings are omitted.\n- Ensure that encryption settings align with audit and compliance reporting requirements.\n- Coordinate with key management policies to maintain secure key lifecycle and rotation.\n- Test encryption settings thoroughly to confirm data is encrypted and accessible as expected.\n\n**Examples**\n- \"AES256\" — Server-side encryption using Amazon S3-managed encryption keys (SSE-S3).\n- \"aws:kms\" — Server-side encryption using AWS Key Management Service-managed keys (SSE-KMS"}}},"Wrapper":{"type":"object","description":"Configuration for Wrapper exports","properties":{"function":{"type":"string","description":"Specifies the exact name of the function to be invoked within the wrapper context, enabling dynamic and flexible execution of different functionalities based on the provided value. This property acts as a critical key identifier that directs the wrapper to call a specific function, facilitating modular, reusable, and maintainable code design. The function name must correspond to a valid, accessible, and callable function within the wrapper's scope, ensuring that the intended operation is performed accurately and efficiently. 
By allowing the function to be selected at runtime, this mechanism supports dynamic behavior, adaptable workflows, and context-sensitive execution paths, enhancing the overall flexibility and scalability of the system.\n\n**Field behavior**\n- Determines which specific function the wrapper will execute during its operation.\n- Accepts a string value representing the exact name of the target function.\n- Must match a valid and accessible function within the wrapper’s execution context.\n- Influences the wrapper’s behavior and output based on the selected function.\n- Supports dynamic assignment to enable flexible and context-dependent function calls.\n- Enables runtime selection of functionality, allowing for versatile and adaptive processing.\n\n**Implementation guidance**\n- Validate that the provided function name exists and is callable within the wrapper environment before invocation.\n- Ensure the function name is supplied as a string and adheres to the naming conventions used in the wrapper’s codebase.\n- Implement robust error handling to gracefully manage cases where the specified function does not exist, is inaccessible, or fails during execution.\n- Sanitize the input to prevent injection attacks, code injection, or other security vulnerabilities.\n- Allow dynamic updates to this property to support runtime changes in function execution without requiring system restarts.\n- Log function invocation attempts and outcomes for auditing and debugging purposes.\n- Consider implementing a whitelist or registry of allowed function names to enhance security and control.\n\n**Examples**\n- \"initialize\"\n- \"processData\"\n- \"renderOutput\"\n- \"cleanupResources\"\n- \"fetchUserDetails\"\n- \"validateInput\"\n- \"exportReport\"\n- \"authenticateUser\"\n\n**Important notes**\n- The function name is case-sensitive and must precisely match the function identifier in the underlying implementation.\n- Misspelled or incorrect function names will cause invocation errors or runtime failures.\n- This property specifies only the function name; any parameters or arguments for the function should be provided separately.\n- The wrapper context must be properly initialized and configured to recognize and execute the specified function.\n- Changes to this property can significantly alter the behavior of the wrapper, so modifications should be managed carefully"},"configuration":{"type":"object","description":"The `configuration` property defines a comprehensive and centralized set of parameters, settings, and operational directives that govern the behavior, functionality, and characteristics of the wrapper component. It acts as the primary container for all customizable options that influence how the wrapper initializes, runs, and adapts to various deployment environments or user requirements. This includes environment-specific configurations, feature toggles, resource limits, integration endpoints, security credentials, logging preferences, and other critical controls. 
By encapsulating these diverse settings, the configuration property enables fine-grained control over the wrapper’s runtime behavior, supports dynamic adaptability, and facilitates consistent management across different scenarios.\n\n**Field behavior**\n- Encapsulates all relevant configurable options affecting the wrapper’s operation and lifecycle.\n- Supports complex nested structures or key-value pairs to represent hierarchical and interrelated settings.\n- Can be mandatory or optional depending on the wrapper’s design and operational context.\n- Changes to configuration typically influence initialization, runtime behavior, and potentially require component restart or reinitialization.\n- May support dynamic updates if the wrapper is designed for live reconfiguration without downtime.\n- Acts as a single source of truth for operational parameters, ensuring consistency and predictability.\n\n**Implementation guidance**\n- Organize configuration parameters logically, grouping related settings to improve readability and maintainability.\n- Implement robust validation mechanisms to enforce correct data types, value ranges, and adherence to constraints.\n- Provide sensible default values for optional parameters to enhance usability and prevent misconfiguration.\n- Design the configuration schema to be extensible and backward-compatible, allowing seamless future enhancements.\n- Secure sensitive information within the configuration using encryption, secure storage, or exclusion from logs and error messages.\n- Thoroughly document each configuration option, including its purpose, valid values, default behavior, and impact on the wrapper.\n- Consider environment-specific overrides or profiles to facilitate deployment flexibility.\n- Ensure configuration changes are atomic and consistent to avoid partial or invalid states.\n\n**Examples**\n- `{ \"timeout\": 5000, \"enableLogging\": true, \"maxRetries\": 3 }`\n- `{ \"environment\": \"production\", \"apiEndpoint\": \"https://api.example.com\", \"features\": { \"beta\": false } }`\n- Nested objects specifying database connection details, authentication credentials, UI themes, third-party service integrations, or resource limits.\n- Feature flags enabling or disabling experimental capabilities dynamically.\n- Security parameters such as OAuth tokens, encryption keys, or access control lists.\n\n**Important notes**\n- Comprehensive and clear documentation of all"},"lookups":{"type":"object","properties":{"type":{"type":"string","description":"type: >\n  Specifies the category or classification of the lookup within the wrapper context.\n  This property defines the nature or kind of lookup being performed or referenced.\n  **Field behavior**\n  - Determines the specific type of lookup operation or data classification.\n  - Influences how the lookup data is processed or interpreted.\n  - May restrict or enable certain values or options based on the type selected.\n  **Implementation guidance**\n  - Use clear and consistent naming conventions for different types.\n  - Validate the value against a predefined set of allowed types to ensure data integrity.\n  - Ensure that the type aligns with the corresponding lookup logic or data source.\n  **Examples**\n  - \"userRole\"\n  - \"productCategory\"\n  - \"statusCode\"\n  - \"regionCode\"\n  **Important notes**\n  - The type value is critical for correctly resolving and handling lookup data.\n  - Changing the type may affect downstream processing or data retrieval.\n  - Ensure compatibility with other related 
fields or components that depend on this type.\n  **Dependency chain**\n  - Depends on the wrapper context to provide scope.\n  - Influences the selection and retrieval of lookup values.\n  - May be linked to validation schemas or business logic modules.\n  **Technical details**\n  - Typically represented as a string.\n  - Should conform to a controlled vocabulary or enumeration where applicable.\n  - Case sensitivity may apply depending on implementation."},"_lookupCacheId":{"type":"string","format":"objectId","description":"_lookupCacheId: Identifier used to reference a specific cache entry for lookup operations within the wrapper's lookup mechanism.\n**Field behavior**\n- Serves as a unique identifier for cached lookup data.\n- Used to retrieve or update cached lookup results efficiently.\n- Helps in minimizing redundant lookup operations by reusing cached data.\n- Typically remains constant for the lifespan of the cached data entry.\n**Implementation guidance**\n- Should be generated to ensure uniqueness within the scope of the wrapper's lookups.\n- Must be consistent and stable to correctly reference the intended cache entry.\n- Should be invalidated or updated when the underlying lookup data changes.\n- Can be a string or numeric identifier depending on the caching system design.\n**Examples**\n- \"userRolesCache_v1\"\n- \"productLookupCache_2024_06\"\n- \"regionCodesCache_42\"\n**Important notes**\n- Proper management of _lookupCacheId is critical to avoid stale or incorrect lookup data.\n- Changing the _lookupCacheId without updating the cache content can lead to lookup failures.\n- Should be used in conjunction with cache expiration or invalidation strategies.\n**Dependency chain**\n- Depends on the wrapper.lookups system to manage cached lookup data.\n- May interact with cache storage or memory management components.\n- Influences lookup operation performance and data consistency.\n**Technical details**\n- Typically implemented as a string identifier.\n- May be used as a key in key-value cache stores.\n- Should be designed to avoid collisions with other cache identifiers.\n- May include versioning or timestamp information to manage cache lifecycle."},"extract":{"type":"string","description":"Extract: Specifies the extraction criteria or pattern used to retrieve specific data from a given input within the lookup process.\n**Field behavior**\n- Defines the rules or patterns to identify and extract relevant information from input data.\n- Can be a string, regular expression, or structured query depending on the implementation.\n- Used during the lookup operation to isolate the desired subset of data.\n- May support multiple extraction patterns or conditional extraction logic.\n**Implementation guidance**\n- Ensure the extraction pattern is precise to avoid incorrect or partial data retrieval.\n- Validate the extraction criteria against sample input data to confirm accuracy.\n- Support common pattern formats such as regex for flexible and powerful extraction.\n- Provide clear error handling if the extraction pattern fails or returns no results.\n- Allow for optional extraction parameters to customize behavior (e.g., case sensitivity).\n**Examples**\n- A regex pattern like `\"\\d{4}-\\d{2}-\\d{2}\"` to extract dates in YYYY-MM-DD format.\n- A JSONPath expression such as `$.store.book[*].author` to extract authors from a JSON object.\n- A simple substring like `\"ERROR:\"` to extract error messages from logs.\n- XPath query to extract nodes from XML data.\n**Important notes**\n- The 
extraction logic directly impacts the accuracy and relevance of the lookup results.\n- Complex extraction patterns may affect performance; optimize where possible.\n- Extraction should handle edge cases such as missing fields or unexpected data formats gracefully.\n- Document supported extraction pattern types and syntax clearly for users.\n**Dependency chain**\n- Depends on the input data format and structure to define appropriate extraction criteria.\n- Works in conjunction with the lookup mechanism that applies the extraction.\n- May interact with validation or transformation steps post-extraction.\n**Technical details**\n- Typically implemented using pattern matching libraries or query language parsers.\n- May require escaping or encoding special characters within extraction patterns.\n- Should support configurable options like multiline matching or case sensitivity.\n- Extraction results are usually returned as strings, arrays, or structured objects depending on context."},"map":{"type":"object","description":"map: >\n  A mapping object that defines key-value pairs used for lookup operations within the wrapper context.\n  **Field behavior**\n  - Serves as a dictionary or associative array for translating or mapping input keys to corresponding output values.\n  - Used to customize or override default lookup values dynamically.\n  - Supports string keys and values, but may also support other data types depending on implementation.\n  **Implementation guidance**\n  - Ensure keys are unique within the map to avoid conflicts.\n  - Validate that values conform to expected formats or types required by the lookup logic.\n  - Consider immutability or controlled updates to prevent unintended side effects during runtime.\n  - Provide clear error handling for missing or invalid keys during lookup operations.\n  **Examples**\n  - {\"US\": \"United States\", \"CA\": \"Canada\", \"MX\": \"Mexico\"}\n  - {\"error404\": \"Not Found\", \"error500\": \"Internal Server Error\"}\n  - {\"env\": \"production\", \"version\": \"1.2.3\"}\n  **Important notes**\n  - The map is context-specific and should be populated according to the needs of the wrapper's lookup functionality.\n  - Large maps may impact performance; optimize size and access patterns accordingly.\n  - Changes to the map may require reinitialization or refresh of dependent components.\n  **Dependency chain**\n  - Depends on the wrapper.lookups context for proper integration.\n  - May be referenced by other properties or methods performing lookup operations.\n  **Technical details**\n  - Typically implemented as a JSON object or dictionary data structure.\n  - Keys and values are usually strings but can be extended to other serializable types.\n  - Should support efficient retrieval, ideally O(1) time complexity for lookups."},"default":{"type":["object","null"],"description":"The default property specifies the fallback or initial value to be used within the wrapper's lookups configuration when no other specific value is provided or applicable. 
This ensures that the system has a predefined baseline value to operate with, preventing errors or undefined behavior.\n\n**Field behavior**\n- Acts as the fallback value in the lookup process.\n- Used when no other matching lookup value is found.\n- Provides a baseline or initial value for the wrapper configuration.\n- Ensures consistent behavior by avoiding null or undefined states.\n\n**Implementation guidance**\n- Define a sensible and valid default value that aligns with the expected data type and usage context.\n- Ensure the default value is compatible with other dependent fields or processes.\n- Update the default value cautiously, as it impacts the fallback behavior system-wide.\n- Validate the default value during configuration loading to prevent runtime errors.\n\n**Examples**\n- A string value such as \"N/A\" or \"unknown\" for textual lookups.\n- A numeric value like 0 or -1 for numerical lookups.\n- A boolean value such as false when a true/false default is needed.\n- An object or dictionary representing a default configuration set.\n\n**Important notes**\n- The default value should be meaningful and appropriate to avoid misleading results.\n- Overriding the default value may affect downstream logic relying on fallback behavior.\n- Absence of a default value might lead to errors or unexpected behavior in the lookup process.\n- Consider localization or context-specific defaults if applicable.\n\n**Dependency chain**\n- Used by the wrapper.lookups mechanism during value resolution.\n- May influence or be influenced by other lookup-related properties or configurations.\n- Interacts with error handling or fallback logic in the system.\n\n**Technical details**\n- Data type should match the expected type of lookup values (string, number, boolean, object, etc.).\n- Stored and accessed as part of the wrapper's lookup configuration object.\n- Should be immutable during runtime unless explicitly reconfigured.\n- May be serialized/deserialized as part of configuration files or API payloads."},"allowFailures":{"type":"boolean","description":"allowFailures: Indicates whether the system should permit failures during the lookup process without aborting the entire operation.\n**Field behavior**\n- When set to true, individual lookup failures are tolerated, allowing the process to continue.\n- When set to false, any failure in the lookup process causes the entire operation to fail.\n- Helps in scenarios where partial results are acceptable or expected.\n**Implementation guidance**\n- Use this flag to control error handling behavior in lookup operations.\n- Ensure that downstream processes can handle partial or incomplete data if allowFailures is true.\n- Log or track failures when allowFailures is enabled to aid in debugging and monitoring.\n**Examples**\n- allowFailures: true — The system continues processing even if some lookups fail.\n- allowFailures: false — The system stops and reports an error immediately upon a lookup failure.\n**Important notes**\n- Enabling allowFailures may result in incomplete or partial data sets.\n- Use with caution in critical systems where data integrity is paramount.\n- Consider combining with retry mechanisms or fallback strategies.\n**Dependency chain**\n- Depends on the lookup operation within wrapper.lookups.\n- May affect error handling and result aggregation components.\n**Technical details**\n- Boolean value: true or false.\n- Default behavior should be defined explicitly to avoid ambiguity.\n- Should be checked before executing lookup calls to determine 
error handling flow."},"_id":{"type":"object","description":"Unique identifier for the lookup entry within the wrapper context.\n**Field behavior**\n- Serves as the primary key to uniquely identify each lookup entry.\n- Immutable once assigned to ensure consistent referencing.\n- Used to retrieve, update, or delete specific lookup entries.\n**Implementation guidance**\n- Should be generated using a globally unique identifier (e.g., UUID or ObjectId).\n- Must be indexed in the database for efficient querying.\n- Should be validated to ensure uniqueness within the wrapper.lookups collection.\n**Examples**\n- \"507f1f77bcf86cd799439011\"\n- \"a1b2c3d4e5f6789012345678\"\n**Important notes**\n- This field is mandatory for every lookup entry.\n- Changing the _id after creation can lead to data inconsistency.\n- Should not contain sensitive or personally identifiable information.\n**Dependency chain**\n- Used by other fields or services that reference lookup entries.\n- May be linked to foreign keys or references in related collections or tables.\n**Technical details**\n- Typically stored as a string or ObjectId type depending on the database.\n- Must conform to the format and length constraints of the chosen identifier scheme.\n- Should be generated server-side to prevent collisions."},"cLocked":{"type":"object","description":"cLocked indicates whether the item is currently locked, preventing modifications or deletions.\n**Field behavior**\n- Represents the lock status of an item within the system.\n- When set to true, the item is considered locked and cannot be edited or deleted.\n- When set to false, the item is unlocked and available for modifications.\n- Typically used to enforce data integrity and prevent concurrent conflicting changes.\n**Implementation guidance**\n- Should be a boolean value: true or false.\n- Ensure that any operation attempting to modify or delete the item checks the cLocked status first.\n- Update the cLocked status appropriately when locking or unlocking the item.\n- Consider integrating with user permissions to control who can change the lock status.\n**Examples**\n- cLocked: true (The item is locked and cannot be changed.)\n- cLocked: false (The item is unlocked and editable.)\n**Important notes**\n- Locking an item does not necessarily mean it is read-only; it depends on the system's enforcement.\n- The lock status should be clearly communicated to users to avoid confusion.\n- Changes to cLocked should be logged for audit purposes.\n**Dependency chain**\n- May depend on user roles or permissions to set or unset the lock.\n- Other fields or operations may check cLocked before proceeding.\n**Technical details**\n- Data type: Boolean.\n- Default value is typically false unless specified otherwise.\n- Stored as part of the lookup or wrapper object in the data model.\n- Should be efficiently queryable to enforce lock checks during operations."}},"description":"A collection of key-value pairs that serve as a centralized reference dictionary to map or translate specific identifiers, codes, or keys into meaningful, human-readable, or contextually relevant values within the wrapper object. This property facilitates data normalization, standardization, and clarity across various components of the application by providing descriptive labels, metadata, or explanatory information associated with each key. 
It enhances data interpretation, validation, and presentation by enabling consistent and seamless conversion of coded data into understandable formats, supporting both simple and complex hierarchical mappings as needed.\n\n**Field behavior**\n- Functions as a lookup dictionary to resolve codes or keys into descriptive, user-friendly values.\n- Supports consistent data representation and interpretation throughout the application.\n- Commonly accessed during data processing, validation, or UI rendering to enhance readability.\n- May support hierarchical or nested mappings for complex relationships.\n- Typically remains static or changes infrequently but should reflect the latest valid mappings.\n\n**Implementation guidance**\n- Structure as an object, map, or dictionary with unique, well-defined keys and corresponding descriptive values.\n- Ensure keys are consistent, unambiguous, and conform to expected formats or standards.\n- Values should be concise, clear, and informative to facilitate easy understanding by end-users or systems.\n- Support nested or multi-level lookups if the domain requires hierarchical mapping.\n- Validate lookup data integrity regularly to prevent missing, outdated, or incorrect mappings.\n- Consider immutability or concurrency controls if the lookup data is accessed or modified concurrently.\n- Optimize for efficient retrieval, aiming for constant-time (O(1)) lookup performance.\n\n**Examples**\n- Mapping ISO country codes to full country names, e.g., {\"US\": \"United States\", \"FR\": \"France\"}.\n- Translating HTTP status codes to descriptive messages, e.g., {\"200\": \"OK\", \"404\": \"Not Found\"}.\n- Converting product or SKU identifiers to product names within an inventory management system.\n- Mapping error codes to user-friendly error descriptions.\n- Associating role IDs with role names in an access control system.\n\n**Important notes**\n- Keep the lookup collection current to accurately reflect any updates in codes or their meanings.\n- Avoid including sensitive, confidential, or personally identifiable information within lookup values.\n- Monitor and manage the size of the lookup collection to prevent performance degradation.\n- Ensure thread-safety or appropriate synchronization mechanisms if the lookups are mutable and accessed concurrently.\n- Changes to lookup data can have widespread effects on data interpretation and user interface behavior."}}},"Salesforce":{"type":"object","description":"Configuration for Salesforce imports containing operation type, API selection, and object-specific settings.\n\n**IMPORTANT:** This schema defines ONLY the Salesforce-specific configuration properties. Properties like `ignoreExisting`, `ignoreMissing`, `name`, `description` are NOT part of this schema - they belong at a different level.\n\nWhen asked to generate this configuration, return an object with these properties:\n- `operation` (REQUIRED): The Salesforce operation type (insert, update, upsert, etc.)\n- `api` (REQUIRED): The Salesforce API to use (default: \"soap\")\n- `sObjectType`: The Salesforce object type (e.g., \"Account\", \"Contact\", \"Vendor__c\")\n- `idLookup`: Configuration for looking up existing records\n- `upsert`: Configuration for upsert operations (when operation is \"upsert\")\n- And other operation-specific properties as needed","properties":{"lookups":{"type":["string","null"],"description":"A collection of lookup tables or reference data sets that provide standardized, predefined values or options within the application context. 
These lookup tables enable consistent data entry, validation, and interpretation by defining controlled vocabularies or permissible values for various fields or parameters across the system. They serve as authoritative sources for reference data, ensuring uniformity and reducing errors in data handling and user interactions. Lookup tables typically encompass static or infrequently changing data that supports consistent business logic and user interface behavior across multiple modules or components. By centralizing these reference datasets, the system promotes reusability, simplifies maintenance, and enhances data integrity throughout the application lifecycle.\n\n**Field behavior**\n- Contains multiple named lookup tables, each representing a distinct category or domain of reference data.\n- Used to populate user interface elements such as dropdown menus, autocomplete fields, and selection lists.\n- Supports validation logic by restricting input to predefined permissible values.\n- Typically consists of static or infrequently changing data that underpins consistent data interpretation.\n- May be referenced by multiple components or modules within the application to maintain data consistency.\n- Enables dynamic adaptation of UI and business rules by centralizing reference data management.\n- Facilitates localization by supporting language-specific labels or descriptions where applicable.\n- Often versioned or managed to track changes and ensure backward compatibility.\n\n**Implementation guidance**\n- Structure as a dictionary or map where keys correspond to lookup table names and values are arrays or lists of unique, well-defined entries.\n- Ensure each lookup entry is unique within its table and includes necessary metadata such as codes, descriptions, or display labels.\n- Design for easy updates or extensions to lookup tables without compromising existing data integrity or system stability.\n- Implement caching strategies to optimize performance, especially for client-side consumption.\n- Consider supporting localization and internationalization by including language-specific labels or descriptions where applicable.\n- Incorporate versioning or change management mechanisms to handle updates without disrupting dependent systems.\n- Validate data integrity rigorously during ingestion or updates to prevent duplicates, inconsistencies, or invalid entries.\n- Separate lookup data from business logic to maintain clarity and ease of maintenance.\n- Provide clear documentation for each lookup table to facilitate understanding and correct usage by developers and users.\n- Ensure lookup data is accessible through standardized APIs or interfaces to promote interoperability.\n\n**Examples**\n- A \"countries\" lookup containing standardized country codes (e.g., ISO 3166-1 alpha-2) and their corresponding country names.\n- A \"statusCodes\" lookup defining permissible status values such as \""},"operation":{"type":"string","enum":["insert","update","upsert","upsertpicklistvalues","delete","addupdate"],"description":"Specifies the type of Salesforce operation to perform on records. 
This determines how data is created, modified, or removed in Salesforce.\n\n**Field behavior**\n- Defines the specific Salesforce action to execute (insert, update, upsert, etc.)\n- Dictates required fields and validation rules for the operation\n- Determines API endpoint and request structure\n- Automatically converted to lowercase before processing\n- Influences response format and error handling\n\n**Operation types**\n\n****insert****\n- **Purpose:** Create new records in Salesforce\n- **Behavior:** Creates new records\n- **Use when:** Creating brand new records\n- **With ignoreExisting: true:** Checks for existing records and skips them (requires idLookup.whereClause)\n- **Without ignoreExisting:** Creates all records without checking for duplicates\n- **Example:** \"Insert new leads from marketing campaign\"\n- **Example with ignoreExisting:** \"Create vendors while ignoring existing vendors\" → operation: \"insert\" + ignoreExisting: true\n\n****update****\n- **Purpose:** Modify existing records in Salesforce\n- **Behavior:** Requires record ID; fails if record doesn't exist\n- **Use when:** Updating known existing records\n- **Example:** \"Update customer status based on order completion\"\n\n****upsert****\n- **Purpose:** Create or update records based on external ID\n- **Behavior:** Updates if external ID matches, creates if not found\n- **Use when:** Syncing data from external systems with external ID field\n- **Example:** \"Upsert customers by External_ID__c field\"\n- **REQUIRES:**\n  - `idLookup.extract` field (maps incoming field to External ID)\n  - `upsert.externalIdField` field (specifies Salesforce External ID field)\n  - Both fields MUST be set for upsert to work\n\n****upsertpicklistvalues****\n- **Purpose:** Create or update picklist values dynamically\n- **Behavior:** Manages picklist value creation/updates\n- **Use when:** Maintaining picklist values from external data\n- **Example:** \"Sync product categories to Salesforce picklist\"\n\n****delete****\n- **Purpose:** Remove records from Salesforce\n- **Behavior:** Requires record ID; records moved to recycle bin\n- **Use when:** Removing obsolete or invalid records\n- **Example:** \"Delete expired opportunities\"\n\n****addupdate****\n- **Purpose:** Create new or update existing records (Celigo-specific)\n- **Behavior:** Similar to upsert but uses different matching logic\n- **Use when:** Upserting without requiring external ID field\n- **Example:** \"Sync contacts, create new or update existing by email\"\n\n**Implementation guidance**\n- Choose operation based on whether records are new, existing, or unknown\n- For **insert**: Ensure records are truly new to avoid errors\n- For **insert with ignoreExisting: true**: MUST set `idLookup.whereClause` to check for existing records\n- For **update/delete**: MUST set `idLookup.whereClause` to identify records\n- For **upsert**: MUST also include the `upsert` object with `externalIdField` property\n- For **upsert**: MUST also set `idLookup.extract` to map incoming field\n- For **addupdate**: MUST set `idLookup.whereClause` to define matching criteria\n- Use **upsertpicklistvalues** only for picklist management scenarios\n\n**Common patterns**\n- **\"Create X while ignoring existing X\"** → operation: \"insert\" + api: \"soap\" + idLookup.whereClause\n- **\"Create or update X\"** → operation: \"upsert\" + api: \"soap\" + upsert.externalIdField + idLookup.extract\n- **\"Update X\"** → operation: \"update\" + api: \"soap\" + idLookup.whereClause\n- **\"Sync X\"** → 
operation: \"addupdate\" + api: \"soap\" + idLookup.whereClause\n- **ALWAYS include** api field (default to \"soap\" if not specified in prompt)\n\n**Examples**\n\n**Insert new records**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Lead\"\n}\n```\nPrompt: \"Create new leads in Salesforce\"\n\n**Insert with ignore existing**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Vendor__c\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{VendorName}}'\"\n  }\n}\n```\nPrompt: \"Create vendor records while ignoring existing vendors\"\n\n**NOTE:** For \"ignore existing\" prompts, also remember that `ignoreExisting: true` belongs at a different level (not part of this salesforce configuration).\n\n**Update existing records**\n```json\n{\n  \"operation\": \"update\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Account\",\n  \"idLookup\": {\n    \"whereClause\": \"Id = '{{AccountId}}'\"\n  }\n}\n```\nPrompt: \"Update existing accounts in Salesforce\"\n\n**Upsert by external id**\n```json\n{\n  \"operation\": \"upsert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Contact\",\n  \"idLookup\": {\n    \"extract\": \"ExternalId\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\nPrompt: \"Create or update contacts by External_ID__c\"\n\n**Addupdate (create or update)**\n```json\n{\n  \"operation\": \"addupdate\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Customer__c\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{CustomerName}}'\"\n  }\n}\n```\nPrompt: \"Sync customers, create new or update existing\"\n\n**Important notes**\n- Operation value is case-insensitive (automatically converted to lowercase)\n- Invalid operation values will cause API validation errors\n- Each operation has different requirements for record identification\n- **CRITICAL for upsert**: MUST include the `upsert` object with `externalIdField` property\n- **CRITICAL for upsert**: MUST set `idLookup.extract` to map incoming field to External ID\n- **upsert** requires external ID field to be configured in Salesforce\n- **addupdate** is Celigo-specific and may not work with all Salesforce APIs\n- Operations must align with user's Salesforce permissions\n- Some operations support bulk processing more efficiently than others"},"api":{"type":"string","enum":["soap","rest","metadata","compositerecord"],"description":"**REQUIRED FIELD:** Specifies which Salesforce API to use for the import operation. Different APIs have different capabilities, performance characteristics, and use cases.\n\n**CRITICAL:** This field MUST ALWAYS be set. 
If no specific API is mentioned in the prompt, **DEFAULT to \"soap\"**.\n\n**Field behavior**\n- **REQUIRED** for all Salesforce imports\n- Determines the protocol and endpoint used for Salesforce communication\n- Influences request format, batch sizes, and response handling\n- Automatically converted to lowercase before processing\n- Affects available features and operation types\n- Impacts performance and error handling behavior\n\n**Api types**\n\n****soap** (Default - Most Common)**\n- **Purpose:** SOAP API with additional features like AllOrNone transaction control\n- **Best for:** Transactional imports requiring rollback capabilities\n- **Features:** AllOrNone header, enhanced transaction control, XML-based\n- **Limits:** Similar to REST but with more overhead\n- **Use when:** Need transaction rollback, legacy integrations\n- **Example:** \"Import invoices with all-or-none transaction guarantee\"\n\n****rest****\n- **Purpose:** Standard REST API for single-record and small-batch operations\n- **Best for:** Real-time imports, small datasets (< 200 records/call)\n- **Features:** Full CRUD operations, flexible querying, rich error messages\n- **Limits:** 2,000 records per API call\n- **Use when:** Standard import operations, real-time data sync\n- **Example:** \"Import leads from web form submissions\"\n\n****metadata****\n- **Purpose:** Metadata API for importing configuration and setup data\n- **Best for:** Deploying Salesforce configuration, not data records\n- **Features:** Deploy custom objects, fields, workflows, etc.\n- **Use when:** Deploying Salesforce configuration between orgs\n- **Example:** \"Deploy custom object definitions from non-production to production\"\n- **Note:** NOT for data records - only for metadata\n\n****compositerecord****\n- **Purpose:** Composite Graph API for related record trees\n- **Best for:** Creating multiple related records in one request\n- **Features:** Create parent-child relationships atomically\n- **Use when:** Importing hierarchical data (e.g., Account + Contacts + Opportunities)\n- **Example:** \"Import accounts with their related contacts in one operation\"\n\n**Implementation guidance**\n- **MUST ALWAYS SET THIS FIELD** - Never leave it undefined\n- **Default to \"soap\"** if the prompt doesn't specify an API type\n- Use **\"rest\"** for standard real-time imports and small batches\n- Use **\"soap\"** when AllOrNone transaction control is explicitly required\n- Use **\"metadata\"** only for configuration deployment, never for data\n- Use **\"compositerecord\"** for complex parent-child relationship imports\n- Consider API limits and performance characteristics for your data volume\n- Ensure the Salesforce connection supports the chosen API type\n\n**Default choice**\nWhen in doubt or when no API is mentioned in the prompt → **USE \"soap\"**\n\n**Examples**\n\n**Rest api**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"rest\",\n  \"sObjectType\": \"Lead\"\n}\n```\nPrompt: \"Import leads into Salesforce\"\n\n**Soap api with transaction control**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Invoice__c\",\n  \"soap\": {\n    \"headers\": {\n      \"allOrNone\": true\n    }\n  }\n}\n```\nPrompt: \"Import invoices with all-or-none guarantee\"\n\n**Composite Record api**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"compositerecord\",\n  \"sObjectType\": \"Account\"\n}\n```\nPrompt: \"Import accounts with related contacts and opportunities\"\n\n**Important notes**\n- API value is 
case-insensitive (automatically converted to lowercase)\n- **SOAP API is the default and most commonly used**\n- SOAP API has slightly more overhead than REST but provides transaction guarantees\n- Metadata API is for configuration only, not data records\n- Composite Record API requires careful relationship mapping\n- Each API has different rate limits and performance characteristics\n- Ensure your Salesforce connection and permissions support the chosen API"},"soap":{"type":"object","properties":{"headers":{"type":"object","properties":{"allOrNone":{"type":"boolean","description":"allOrNone specifies whether the Salesforce API operation should be executed atomically, ensuring that either all parts of the operation succeed or none do. When set to true, if any record in a batch operation fails, the entire transaction is rolled back and no changes are committed, preserving data integrity across the dataset. When set to false, the operation allows partial success, meaning some records may be processed and committed even if others fail, which can improve performance but requires careful error handling to manage partial failures. This property is primarily applicable to data manipulation operations such as create, update, delete, and upsert, and is critical for maintaining consistent and reliable data states in multi-record transactions.\n\n**Field behavior**\n- When true, enforces atomicity: all records in the batch must succeed or the entire operation is rolled back with no changes applied.\n- When false, permits partial success: successfully processed records are committed even if some records fail.\n- Applies mainly to batch DML operations including create, update, delete, and upsert.\n- Helps maintain data consistency by preventing partial updates when strict transactional integrity is required.\n- Influences how errors are reported and handled in the API response.\n- Affects transaction commit behavior and rollback mechanisms within the Salesforce platform.\n\n**Implementation guidance**\n- Set to true to guarantee strict data consistency and avoid partial data changes that could lead to inconsistent states.\n- Set to false to allow higher throughput and partial processing when some failures are acceptable or expected.\n- Default value is false if the property is omitted, allowing partial success by default.\n- When false, implement robust error handling to process partial success responses and handle individual record errors appropriately.\n- Verify that the target Salesforce API operation supports the allOrNone header before use.\n- Consider the impact on transaction duration and system resources when enabling atomic operations.\n- Use in scenarios where data integrity is paramount, such as financial or compliance-related data updates.\n\n**Examples**\n- allOrNone = true: Attempting to update 10 records where one record fails validation results in no records being updated, preserving atomicity.\n- allOrNone = false: Attempting to update 10 records where one record fails validation results in 9 records updated successfully and one failure reported.\n- allOrNone = true: Deleting multiple records in a batch will either delete all specified records or none if any deletion fails.\n- allOrNone = false: Creating multiple records where some violate unique constraints results in successful creation of valid records and reported failures for the records that violate constraints."}},"description":"Headers to be included in the SOAP request sent to Salesforce, primarily used for authentication, session management, and transmitting additional metadata required by the 
Salesforce API. These headers form part of the SOAP envelope's header section and are essential for establishing and maintaining a valid session, specifying client options, and enabling various Salesforce API features. They can include standard headers such as SessionHeader for session identification, CallOptions for client-specific settings, and LoginScopeHeader for scoping login requests, as well as custom headers tailored to specific integration needs. Proper configuration of these headers ensures secure and successful communication with Salesforce services, enabling precise control over API interactions and session lifecycle.\n\n**Field behavior**\n- Defines the set of SOAP headers included in every request to the Salesforce API.\n- Facilitates passing of authentication credentials like session IDs or OAuth tokens.\n- Supports inclusion of metadata that controls API behavior or scopes requests.\n- Allows addition of custom headers required by specialized Salesforce integrations.\n- Headers persist across multiple API calls within the same session when set.\n- Modifies request context by influencing how Salesforce processes and authorizes API calls.\n- Enables fine-grained control over API call execution, such as setting query options or client identifiers.\n\n**Implementation guidance**\n- Construct headers as key-value pairs conforming to Salesforce’s SOAP header XML schema.\n- Include mandatory authentication headers such as SessionHeader with valid session IDs.\n- Validate all header values to prevent injection attacks or malformed requests.\n- Use headers to specify client information, API versioning, or organizational context as needed.\n- Ensure header names and structures strictly follow Salesforce SOAP API documentation.\n- Secure sensitive header data to prevent unauthorized access or leakage.\n- Update headers appropriately when session tokens expire or when switching contexts.\n- Test header configurations thoroughly to confirm compatibility with Salesforce API versions.\n- Consider using namespaces and proper XML serialization to maintain SOAP compliance.\n- Handle errors gracefully when headers are missing or invalid to improve debugging.\n\n**Examples**\n- `{ \"SessionHeader\": { \"sessionId\": \"00Dxx0000001gEREAY\" } }` — authenticates the session.\n- `{ \"CallOptions\": { \"client\": \"MyAppClient\" } }` — specifies client application details.\n- `{ \"LoginScopeHeader\": { \"organizationId\": \"00Dxx0000001gEREAY\" } }` — scopes login to a specific organization.\n- `{ \"CustomHeader\": { \"customField\": \"customValue\" } }` — example of a"},"batchSize":{"type":"number","description":"The number of records to be processed in each batch during Salesforce SOAP API operations. This property controls the size of data chunks sent or retrieved in a single API call, optimizing performance and resource utilization by balancing throughput and system resource consumption. Adjusting the batch size directly impacts the number of API calls made, the processing speed, memory usage, and network load, enabling fine-tuning of data operations to suit specific environment constraints and performance goals. 
Proper configuration of batchSize helps prevent API throttling, reduces the risk of timeouts, and ensures efficient handling of large datasets by segmenting data into manageable portions.\n\n**Field behavior**\n- Determines the count of records included in each batch request to the Salesforce SOAP API.\n- Directly influences the total number of API calls required when handling large datasets.\n- Affects processing throughput, latency, and memory usage during data operations.\n- Can be dynamically adjusted to optimize performance based on system capabilities and network conditions.\n- Impacts error handling and retry mechanisms by controlling batch granularity.\n- Influences network load and the likelihood of encountering API limits or timeouts.\n\n**Implementation guidance**\n- Choose a batch size that adheres to Salesforce API limits and recommended best practices.\n- Balance between larger batch sizes (which reduce API calls but increase memory and processing load) and smaller batch sizes (which increase API calls but reduce resource consumption).\n- Ensure the batchSize value is a positive integer within the allowed range for the specific Salesforce operation.\n- Continuously monitor API response times, error rates, and resource utilization to fine-tune the batch size for optimal performance.\n- Default to Salesforce-recommended batch sizes when uncertain or when starting configuration.\n- Consider network stability and latency when selecting batch size to minimize timeouts and failures.\n- Validate batch size compatibility with other integration components and middleware.\n- Test batch size settings in a staging environment before deploying to production to avoid unexpected failures.\n\n**Examples**\n- `batchSize: 200` — Processes 200 records per batch, a common default for many Salesforce operations.\n- `batchSize: 500` — Processes 500 records per batch, often the maximum allowed for certain Salesforce SOAP API calls.\n- `batchSize: 50` — Processes smaller batches suitable for environments with limited memory or network bandwidth.\n- `batchSize: 100` — A moderate batch size balancing throughput and resource usage in typical scenarios.\n\n**Important notes**\n- Salesforce SOAP API enforces maximum batch size limits (commonly 200"}},"description":"Specifies the comprehensive configuration settings required for integrating with the Salesforce SOAP API. This property includes all essential parameters to establish, authenticate, and manage SOAP-based communication with Salesforce services, enabling a wide range of operations such as creating, updating, deleting, querying, and executing actions on Salesforce objects. It encompasses detailed configurations for endpoint URLs, authentication credentials (such as session IDs or OAuth tokens), SOAP headers, session management details, timeout settings, and error handling mechanisms to ensure reliable, secure, and efficient interactions. 
The configuration must strictly adhere to Salesforce's WSDL definitions and security standards, supporting robust SOAP API usage within the application environment.\n\n**Field behavior**\n- Defines all connection, authentication, and operational parameters necessary for Salesforce SOAP API interactions.\n- Facilitates the construction and transmission of properly structured SOAP requests and the parsing of SOAP responses.\n- Manages the session lifecycle, including token acquisition, refresh, and timeout handling to maintain persistent connectivity.\n- Supports comprehensive error detection and handling of SOAP faults, API exceptions, and network issues to maintain integration stability.\n- Enables customization of SOAP headers, namespaces, and envelope structures as required by specific Salesforce API calls.\n- Allows configuration of retry policies and backoff strategies in response to transient errors or rate limiting.\n- Supports environment-specific configurations, such as production, non-production, or custom Salesforce domains.\n\n**Implementation guidance**\n- Configure endpoint URLs and authentication credentials accurately based on the Salesforce environment (production, non-production, or custom domains).\n- Validate SOAP envelope structure, namespaces, and compliance with the latest Salesforce WSDL specifications to prevent communication errors.\n- Implement robust error handling to capture, log, and respond appropriately to SOAP faults, API limit exceptions, and network failures.\n- Securely store and transmit sensitive information such as session IDs, OAuth tokens, passwords, and certificates, following best security practices.\n- Monitor Salesforce API usage limits and implement throttling or queuing mechanisms to avoid exceeding quotas and service disruptions.\n- Set timeout values thoughtfully to balance between preventing hanging requests and allowing sufficient time for valid operations to complete.\n- Regularly update the SOAP configuration to align with Salesforce API version changes, WSDL updates, and evolving security requirements.\n- Incorporate logging and monitoring to track SOAP request and response cycles for troubleshooting and performance optimization.\n- Ensure compatibility with Salesforce security protocols, including TLS versions and encryption standards.\n\n**Examples**\n- Specifying the Salesforce SOAP API endpoint URL along with a valid session ID or OAuth token for authentication.\n- Defining custom SOAP headers"},"sObjectType":{"type":"string","description":"sObjectType specifies the exact Salesforce object type that the API operation will target, such as standard objects like Account, Contact, Opportunity, or custom objects like CustomObject__c. This property is essential for defining the schema context of the API request, determining which fields are relevant, and directing the operation to the correct Salesforce data entity. It plays a critical role in data validation, processing logic, and endpoint routing within the Salesforce environment, ensuring that operations such as create, update, delete, or query are executed against the intended object type with precision. Accurate specification of sObjectType enables dynamic adjustment of request payloads and response handling based on the object’s schema and enforces compliance with Salesforce’s API naming conventions and permissions model.  \n**Field behavior:**  \n- Identifies the specific Salesforce object (sObject) type targeted by the API operation.  
\n- Determines the applicable schema, field availability, and validation rules based on the selected object.  \n- Must exactly match a valid Salesforce object API name recognized by the connected Salesforce instance, including case sensitivity.  \n- Routes the API request to the appropriate Salesforce object endpoint for the intended operation.  \n- Influences dynamic field mapping, data transformation, and response parsing processes.  \n- Supports both standard and custom Salesforce objects, with custom objects requiring the “__c” suffix.  \n**Implementation guidance:**  \n- Validate sObjectType values against the current Salesforce object metadata in the target environment to prevent invalid requests.  \n- Enforce exact case-sensitive matching with Salesforce API object names (e.g., \"Account\" not \"account\").  \n- Support and recognize both standard objects and custom objects, ensuring custom objects include the “__c” suffix.  \n- Implement robust error handling for invalid, unsupported, or deprecated sObjectType values to provide clear feedback.  \n- Regularly update the list of allowed sObjectType values to reflect changes in the Salesforce schema or environment.  \n- Use sObjectType to dynamically tailor API request payloads, field selections, and response parsing logic.  \n- Consider object-level permissions and authentication scopes when processing requests involving specific sObjectTypes.  \n**Examples:**  \n- \"Account\"  \n- \"Contact\"  \n- \"Opportunity\"  \n- \"Lead\"  \n- \"CustomObject__c\"  \n- \"Case\"  \n- \"Campaign\"  \n**Important notes:**  \n- Custom Salesforce objects must be referenced using their exact API names, including the “__c” suffix.  \n-"},"idLookup":{"type":"object","properties":{"extract":{"type":"string","description":"Please specify which field from your export data should map to External ID. It is very important to pick a field that is both unique, and guaranteed to always have a value when exported (otherwise the Salesforce Upsert operation will fail). If sample data is available then a select list should be displayed here to help you find the right field from your export data. 
If sample data is not available then you will need to manually type the field id that should map to External ID.\n\n**Field behavior**\n- Specifies which field from the incoming data contains the External ID value\n- **Used for upsert operations ONLY**\n- Maps the incoming data field to the Salesforce External ID field specified in `upsert.externalIdField`\n- Must reference a field that exists in the incoming data payload\n- The field value must be unique and always populated to ensure reliable matching\n- Generates field mapping at runtime to match incoming records with Salesforce records\n\n**When to use**\n- **upsert operations ONLY** — REQUIRED when operation is \"upsert\"\n- Works together with `upsert.externalIdField` to enable External ID matching\n\n**When not to use**\n- **update** — use `whereClause` instead\n- **delete** — use `whereClause` instead\n- **addupdate** — use `whereClause` instead\n- **insert with ignoreExisting** — use `whereClause` instead\n\n**Implementation guidance**\n- Choose a field from incoming data that contains unique, non-null values\n- Ensure the field is always populated in the incoming records to prevent operation failures\n- This field's value will be matched against the Salesforce External ID field specified in `upsert.externalIdField`\n- Use the exact field name as it appears in the incoming data (case-sensitive)\n- If sample data is available, validate that the field exists and has appropriate values\n- Test with sample data to ensure matching works correctly before production deployment\n\n**Example**\n\n**Upsert by External id**\n```json\n{\n  \"operation\": \"upsert\",\n  \"idLookup\": {\n    \"extract\": \"CustomerNumber\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\nMaps incoming field \"CustomerNumber\" → Salesforce field \"External_ID__c\"\n\n**How it works:**\n- Incoming record has `{\"CustomerNumber\": \"CUST-12345\", \"Name\": \"John\"}`\n- System extracts \"CUST-12345\" from CustomerNumber\n- Searches Salesforce External_ID__c field for \"CUST-12345\"\n- If found: updates the existing record\n- If not found: creates a new record with External_ID__c = \"CUST-12345\"\n\n**Common field names**\n- \"ExternalId\" — incoming field containing external identifier\n- \"CustomerId\" — incoming field with unique customer number\n- \"AccountNumber\" — incoming field with account number\n- \"SourceSystemId\" — incoming field from source system\n\n**Important notes**\n- **ONLY use for upsert operations**\n- For all other operations (update, delete, addupdate, insert with ignoreExisting), use `whereClause` instead\n- Field must exist in incoming data and be consistently populated\n- Works with `upsert.externalIdField` to enable External ID matching\n- Missing or null values will cause the upsert operation to fail\n- Field name is case-sensitive and must match exactly\n- If both `extract` and `whereClause` are set, `extract` takes precedence for upsert"},"whereClause":{"type":"string","description":"To determine if a record exists in Salesforce, define a SOQL query WHERE clause for the sObject type selected above. For example, if you are importing contact records with a unique email, your WHERE clause should identify contacts with the same email.\n\nHandlebars statements are allowed, and more complex WHERE clauses can contain AND and OR logic. 
For example, when the product name alone is not unique, you can define a WHERE clause to find any records where the product name exists AND the product also belongs to a specific product family, resulting in a unique product record that matches both criteria.\n\nTo find string values that contain spaces, wrap them in single quotes, for example:\n\nName = 'John Smith'\n\n**Field behavior**\n- Specifies a SOQL conditional expression to match existing records in Salesforce\n- Omit the \"WHERE\" keyword - only provide the condition expression\n- Supports Handlebars {{fieldName}} syntax to reference incoming data fields\n- Supports SOQL operators: =, !=, >, <, >=, <=, LIKE, IN, NOT IN\n- Supports logical operators: AND, OR, NOT\n- Can use parentheses for grouping complex conditions\n- The clause is evaluated for each incoming record to find matching Salesforce records\n\n**When to use (REQUIRED FOR)**\n- **update** — REQUIRED to identify which records to update\n- **delete** — REQUIRED to identify which records to delete\n- **addupdate** — REQUIRED to identify existing records (update if found, insert if not)\n- **insert with ignoreExisting: true** — REQUIRED to check if records exist (skip if found)\n\n**When not to use**\n- **upsert** — DO NOT use; use `extract` instead\n- **insert without ignoreExisting** — Not needed for simple inserts\n\n**Implementation guidance**\n- Write the clause following SOQL syntax rules, omitting the \"WHERE\" keyword\n- Use {{fieldName}} syntax to reference values from incoming records\n- Wrap string values containing spaces in single quotes: `Name = '{{FullName}}'`\n- Numeric and boolean values don't need quotes: `Age = {{Age}}`\n- Use indexed fields (Email, External ID) for better performance\n- Keep the clause selective to minimize records returned\n- Validate syntax before deployment to prevent runtime errors\n\n**Examples**\n\n**Update by Email**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\nFinds records with matching Email for update\n\n**Insert with ignore existing by Email**\n```json\n{\n  \"operation\": \"insert\",\n  \"ignoreExisting\": true,\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\nChecks if a record with matching Email exists; if found, skips creation\n\n**Addupdate (create or update) by Email**\n```json\n{\n  \"operation\": \"addupdate\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\nUpdates if Email matches, creates new record if not found\n\n**Delete by Account Number**\n```json\n{\n  \"operation\": \"delete\",\n  \"idLookup\": {\n    \"whereClause\": \"AccountNumber = '{{AcctNum}}'\"\n  }\n}\n```\nFinds records with matching AccountNumber for deletion\n\n**Complex and logic**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{ProductName}}' AND ProductFamily__c = '{{Family}}'\"\n  }\n}\n```\nMatches records where both Name AND ProductFamily match\n\n**Complex or logic**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}' OR Phone = '{{Phone}}'\"\n  }\n}\n```\nMatches records where Email OR Phone matches\n\n**String with spaces**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{FullName}}'\"\n  }\n}\n```\nNote: Single quotes around {{FullName}} handle names with spaces like \"John Smith\"\n\n**Important notes**\n- **REQUIRED** for update, delete, addupdate, and 
insert (with ignoreExisting)\n- DO NOT use for upsert operations - use `extract` instead\n- Use {{fieldName}} to reference incoming data fields\n- String values with spaces must be wrapped in single quotes\n- Numeric and boolean values don't need quotes\n- Poor clause design can impact performance and hit governor limits\n- Always test in a non-production environment before production deployment"}},"description":"Configuration for looking up existing Salesforce records. **This object should ONLY contain either `extract` OR `whereClause` - NOT both, and NOT operation or other properties.**\n\n**What this object contains**\nThis object should contain ONLY ONE of these properties:\n- `extract` — for upsert operations ONLY\n- `whereClause` — for update, delete, addupdate, and insert (with ignoreExisting)\n\n**DO NOT include operation, upsert, or any other properties here** - those belong at the parent salesforce level.\n\n**Field behavior**\n- Contains two strategies for finding existing records: `extract` and `whereClause`\n- **`extract`**: Used ONLY for upsert operations\n- **`whereClause`**: Used for update, delete, addupdate, and insert (with ignoreExisting)\n- Generates field mapping or SOQL queries at runtime to match records\n- Critical for preventing duplicate record creation and enabling record updates\n\n**When to use each field**\n\n**Use `extract` (upsert ONLY)**\n- **upsert** — REQUIRED: Maps incoming field to External ID field\n\n**Use `whereClause` (all other operations)**\n- **update** — REQUIRED: Identifies records to update via SOQL query\n- **delete** — REQUIRED: Identifies records to delete via SOQL query\n- **addupdate** — REQUIRED: Identifies existing records via SOQL query\n- **insert with ignoreExisting: true** — REQUIRED: Checks if records exist via SOQL query\n\n**Leave empty**\n- **insert without ignoreExisting** — No lookup needed for simple inserts\n\n**What to set (idLookup object content)**\n\n**For upsert**\n```json\n{\n  \"extract\": \"CustomerNumber\"\n}\n```\n\n**For update/delete/addupdate**\n```json\n{\n  \"whereClause\": \"Email = '{{Email}}'\"\n}\n```\n\n**Parent context examples (for reference)**\nThese show the complete parent-level structure. 
**idLookup itself should only contain extract or whereClause.**\n\n**Upsert (parent context)**\n```json\n{\n  \"operation\": \"upsert\",\n  \"idLookup\": {\n    \"extract\": \"CustomerNumber\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\n\n**Update (parent context)**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\n\n**Implementation guidance**\n- Use `extract` ONLY for upsert operations (with `upsert.externalIdField`)\n- Use `whereClause` for update, delete, addupdate, and insert with ignoreExisting\n- Do not set both `extract` and `whereClause` for the same operation\n- **ONLY set extract or whereClause - do not nest operation or other properties**\n- Ensure the lookup strategy matches the operation type\n- Test thoroughly in a non-production environment before production deployment\n\n**Important notes**\n- **This object ONLY contains `extract` or `whereClause`**\n- **`extract`** is for upsert ONLY - maps to External ID field\n- **`whereClause`** is for all other operations requiring lookups\n- Do NOT nest `operation`, `upsert`, or other properties here - those belong at the parent salesforce level\n- For upsert: `extract` + `upsert.externalIdField` work together (both at salesforce level)\n- For other operations: `whereClause` enables SOQL-based record matching\n- Incorrect configuration can cause duplicates or failed operations"},"upsert":{"type":"object","properties":{"externalIdField":{"type":"string","description":"Specifies the API name of the External ID field in Salesforce that will be used to match records during an upsert operation. This is the TARGET field in Salesforce, while `idLookup.extract` specifies the SOURCE field from incoming data.\n\n**Field behavior**\n- Identifies which Salesforce field serves as the external ID for record matching\n- Must be a field marked as \"External ID\" in Salesforce object configuration\n- Works together with `idLookup.extract` to enable upsert matching\n- The field must be indexed and unique in Salesforce\n- Supports text, number, and email field types configured as external IDs\n- Required when `operation: \"upsert\"`\n\n**How it works with idLookup.extract**\n- **`idLookup.extract`**: Specifies which incoming data field contains the matching value (e.g., \"CustomerNumber\")\n- **`externalIdField`**: Specifies which Salesforce field to match against (e.g., \"External_ID__c\")\n- Together they create the match: incoming \"CustomerNumber\" → Salesforce \"External_ID__c\"\n\n**Implementation guidance**\n- Use the exact Salesforce API name of the field (case-sensitive)\n- Verify the field is marked as \"External ID\" in Salesforce object settings\n- Ensure the field is indexed for optimal performance\n- Confirm the integration user has read/write access to this field\n- The field must have unique values to prevent ambiguous matches\n- Test in a non-production environment before production to verify configuration\n\n**Examples**\n\n**Standard External id field**\n```\n\"AccountNumber\"\n```\nStandard Account field marked as External ID\n\n**Custom External id field**\n```\n\"Custom_External_Id__c\"\n```\nCustom field marked as External ID\n\n**Email as External id**\n```\n\"Email\"\n```\nEmail field configured as External ID on Contact\n\n**NetSuite id as External id**\n```\n\"netsuite_conn__NetSuite_Id__c\"\n```\nNetSuite connector External ID field\n\n**How it works (Parent Context)**\nAt the parent `salesforce` level, the complete 
configuration looks like:\n```json\n{\n  \"operation\": \"upsert\",\n  \"idLookup\": {\n    \"extract\": \"CustomerNumber\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\nWhere:\n- `operation: \"upsert\"` — at salesforce level\n- `idLookup.extract: \"CustomerNumber\"` — incoming field, at salesforce level\n- `upsert.externalIdField: \"External_ID__c\"` — **THIS field**, inside upsert object\n\n**Common external id fields**\n- \"AccountNumber\" — standard Account field\n- \"Email\" — when configured as External ID on Contact\n- \"Custom_External_Id__c\" — custom field on any object\n- \"External_ID__c\" — common naming convention\n- \"Source_System_Id__c\" — for multi-system integrations\n\n**Important notes**\n- Field must be explicitly marked as \"External ID\" in Salesforce\n- Not all fields can be External IDs (must be text, number, or email type)\n- Missing or null values in this field will cause upsert to fail\n- If multiple records match the External ID, upsert will fail\n- Always use with `idLookup.extract` for upsert operations"}},"description":"Configuration for Salesforce upsert operations. This object contains ONLY the `externalIdField` property.\n\n**CRITICAL: This component is REQUIRED when `operation: \"upsert\"`. Always include this component for upsert operations.**\n\n**Field behavior**\n- **REQUIRED** when `operation: \"upsert\"`\n- Contains configuration specific to upsert behavior\n- **ONLY contains ONE property: `externalIdField`**\n- Works in conjunction with `idLookup.extract` (which is at the SAME level as this upsert object, NOT nested within it)\n\n**When to use**\n- **REQUIRED** when `operation` is \"upsert\" - MUST be included\n- Leave undefined for insert, update, delete, or addupdate operations\n- Without this object, upsert operations will fail\n\n**Why IT'S required**\n- Upsert operations need to know which Salesforce field to use for matching\n- The `externalIdField` property specifies the Salesforce External ID field\n- This works with `idLookup.extract` to enable record matching\n- Missing this object will cause the import to fail\n\n**Decision**\n**Return true/include this component if `operation: \"upsert\"` is set. This is REQUIRED for all upsert operations.**\n\n**What to set**\n**This object should ONLY contain:**\n```json\n{\n  \"externalIdField\": \"External_ID__c\"\n}\n```\n\n**DO NOT include operation, idLookup, or any other properties here** - those belong at the parent salesforce level.\n\n**Parent context (for reference only)**\nAt the parent `salesforce` level, you will also set:\n- `operation: \"upsert\"` (at salesforce level)\n- `idLookup.extract` (at salesforce level)\n- `upsert.externalIdField` (THIS object)\n\n**Implementation guidance**\n- Always set `externalIdField` when using upsert\n- Ensure the specified External ID field exists in Salesforce\n- Verify the field is marked as \"External ID\" in Salesforce\n- **ONLY set the externalIdField property - nothing else**\n\n**Important notes**\n- **This object ONLY contains `externalIdField`**\n- Do NOT nest `operation`, `idLookup`, or other properties here\n- Those properties belong at the parent salesforce level, not nested in this upsert object\n- May be extended with additional upsert-specific options in the future\n- Only relevant when operation is \"upsert\""},"upsertpicklistvalues":{"type":"object","properties":{"type":{"type":"string","enum":["picklist","multipicklist"],"description":"Specifies the type of picklist field being managed. 
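For orientation, a hedged sketch of the enclosing upsertpicklistvalues object, built only from the properties defined in this schema (the specific field name, label, and line-count values are illustrative, borrowed from the per-field examples below):\n```json\n{\n  \"type\": \"multipicklist\",\n  \"fullName\": \"Account.MyPicklist__c\",\n  \"label\": \"My Picklist\",\n  \"visibleLines\": 5\n}\n```\n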
Salesforce supports single-select picklists and multi-select picklists.\n\n**Field behavior**\n- Determines whether the picklist allows single or multiple value selection\n- Affects UI rendering and validation in Salesforce\n- Automatically converted to lowercase before processing\n\n**Picklist types**\n\n**picklist**\n- Standard single-select picklist\n- User can select only one value at a time\n- Most common picklist type\n\n**multipicklist**\n- Multi-select picklist\n- User can select multiple values (separated by semicolons)\n- Use for categories, tags, or multi-value selections\n\n**Examples**\n- \"picklist\" — for Status, Priority, Category fields\n- \"multipicklist\" — for Tags, Skills, Interests fields"},"fullName":{"type":"string","description":"The API name of the picklist field in Salesforce, including the object name.\n\n**Field behavior**\n- Fully qualified field name: ObjectName.FieldName__c\n- Must match exact Salesforce API name\n- Case-sensitive\n- Required for picklist value upsert operations\n\n**Examples**\n- \"Account.MyPicklist__c\"\n- \"Contact.Status__c\"\n- \"CustomObject__c.Category__c\""},"label":{"type":"string","description":"The display label for the picklist field in Salesforce UI.\n\n**Field behavior**\n- Human-readable label shown in Salesforce interface\n- Can contain spaces and special characters\n- Does not need to match API name\n\n**Examples**\n- \"My Picklist\"\n- \"Product Category\"\n- \"Customer Status\""},"visibleLines":{"type":"number","description":"For multi-select picklists, specifies how many lines are visible in the UI without scrolling.\n\n**Field behavior**\n- Only applies to multipicklist fields\n- Controls height of the picklist selection box\n- Default is typically 4-5 lines\n\n**Examples**\n- 3 — shows 3 values at once\n- 5 — shows 5 values at once\n- 10 — shows more values for large lists"}},"description":"Configuration for managing Salesforce picklist field metadata when using the `upsertpicklistvalues` operation. This enables dynamic creation or updates to picklist field definitions.\n\n**Field behavior**\n- Used ONLY when `operation: \"upsertpicklistvalues\"`\n- Manages picklist field metadata, not picklist values themselves\n- Enables programmatic picklist field management\n- Requires appropriate Salesforce metadata permissions\n\n**When to use**\n- When operation is set to \"upsertpicklistvalues\"\n- For managing picklist field definitions programmatically\n- When syncing picklist configurations between orgs\n- For automated picklist field deployment\n\n**Important notes**\n- This manages the FIELD itself, not the individual picklist VALUES\n- Requires Salesforce API and Metadata API permissions\n- Should only be set when operation is \"upsertpicklistvalues\"\n- Most imports won't use this - it's for metadata management"},"removeNonSubmittableFields":{"type":"boolean","description":"Indicates whether fields that are not eligible for submission to Salesforce—such as read-only, system-generated, formula, or otherwise restricted fields—should be automatically excluded from the data payload before it is sent. This feature helps prevent submission errors by ensuring that only valid, submittable fields are included in the API request, thereby improving data integrity and reducing the likelihood of operation failures. 
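As a hedged, non-authoritative sketch, the flag sits alongside the other salesforce-level settings described in this schema (the operation and whereClause values here are illustrative, reused from the idLookup examples above):\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  },\n  \"removeNonSubmittableFields\": true\n}\n```\n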
It operates exclusively on the outgoing submission payload without modifying the original source data, making it a safe and effective way to sanitize data before interaction with Salesforce APIs.\n\n**Field behavior**\n- When enabled (set to true), the system analyzes the outgoing data payload and removes any fields identified as non-submittable based on Salesforce metadata, field-level security, and API constraints.\n- When disabled (set to false) or omitted, all fields present in the payload are submitted as-is, which may lead to errors if restricted, read-only, or system-managed fields are included.\n- Enhances data integrity by proactively filtering out fields that could cause rejection or failure during Salesforce data operations.\n- Applies only to the outgoing submission payload, ensuring the original source data remains unchanged and intact.\n- Dynamically adapts to different Salesforce objects, API versions, and user permissions by referencing up-to-date metadata and security settings.\n\n**Implementation guidance**\n- Enable this flag in integration scenarios where the data schema may include fields that Salesforce does not accept for the target object or API version.\n- Maintain an up-to-date cache or real-time access to Salesforce metadata to accurately reflect field-level permissions, submittability, and API changes.\n- Implement logging or reporting mechanisms to track which fields are removed, aiding in auditing, troubleshooting, and transparency.\n- Use in conjunction with other data validation, transformation, and permission-checking steps to ensure clean, compliant data submissions.\n- Thoroughly test in development and staging environments to understand the impact on data flows, error handling, and downstream processes.\n- Consider user roles and permissions, as submittability may vary depending on the authenticated user's access rights and profile settings.\n\n**Examples**\n- `removeNonSubmittableFields: true` — Automatically removes read-only or system-managed fields such as CreatedDate, LastModifiedById, or formula fields before submission.\n- `removeNonSubmittableFields: false` — Submits all fields, potentially causing errors if restricted or read-only fields are included.\n- Omitting the property defaults to false, meaning no automatic removal of non-submittable"},"document":{"type":"object","properties":{"id":{"type":"string","description":"Unique identifier for the document within the Salesforce system, serving as the primary key that distinctly identifies each document record. This identifier is immutable once assigned and must not be altered or reused to maintain data integrity. It is essential for referencing the document in API calls, queries, integrations, and any operations involving the document. The ID is a base-62 encoded string that can be either 15 characters long, which is case-sensitive, or 18 characters long, which includes a case-insensitive checksum to enhance reliability in external systems. Proper handling of this ID ensures accurate linkage and manipulation of document records across various Salesforce processes and integrations.  \n**Field behavior:**  \n- Acts as the primary key uniquely identifying a document record within Salesforce.  \n- Immutable after creation; should never be changed or reassigned.  \n- Used consistently to reference the document across API calls, queries, and integrations.  \n- Case-sensitive and may vary in length (15 or 18 characters).  
\n- Serves as a critical reference point for establishing relationships with other records.  \n**Implementation guidance:**  \n- Must be generated exclusively by Salesforce or the managing system to guarantee uniqueness.  \n- Should be handled as a string to accommodate Salesforce’s ID formats.  \n- Validate incoming IDs against Salesforce ID format standards to ensure correctness.  \n- Avoid manual generation or modification of IDs to prevent data inconsistencies.  \n- Use the 18-character format when possible to leverage the checksum for case-insensitive environments.  \n**Examples:**  \n- \"0151t00000ABCDE\"  \n- \"0151t00000ABCDEAAA\"  \n**Important notes:**  \n- 15-character IDs are case-sensitive, while 18-character IDs include a case-insensitive checksum.  \n- Do not manually create or alter IDs; always use Salesforce-generated values.  \n- This ID is critical for performing document updates, deletions, retrievals, and establishing relationships.  \n- Using the correct ID format is essential for API compatibility and data integrity.  \n**Dependency chain:**  \n- Required for all operations on salesforce.document entities, including CRUD actions.  \n- Other fields and processes may reference this ID to link related records or trigger actions.  \n- Integral to workflows, triggers, and integrations that depend on document identification.  \n**Technical details:**  \n- Salesforce record IDs are base-62 encoded strings.  \n- IDs can be 15 characters (case-sensitive) or 18"},"name":{"type":"string","description":"The name of the document as stored in Salesforce, serving as a unique and human-readable identifier within the Salesforce environment. This string value represents the document's title or label, enabling users and systems to easily recognize, retrieve, and manage the document across user interfaces, search functionalities, and API interactions. It is a critical attribute used to reference the document in various operations such as creation, updates, searches, display contexts, and integration workflows, ensuring clear identification and organization within folders or libraries. 
The name plays a pivotal role in document management by facilitating sorting, filtering, and categorization, and it must adhere to Salesforce’s naming conventions and constraints to maintain system integrity and usability.\n\n**Field behavior**\n- Acts as the primary human-readable identifier for the document.\n- Used in UI elements, search queries, and API calls to locate or reference the document.\n- Typically mandatory when creating or updating document records.\n- Should be unique within the scope of the relevant Salesforce object, folder, or library to avoid ambiguity.\n- Influences sorting, filtering, and organization of documents within lists or folders.\n- Changes to the name may trigger updates or validations in dependent systems or references.\n- Supports international characters to accommodate global users.\n\n**Implementation guidance**\n- Choose descriptive yet concise names to enhance clarity, usability, and discoverability.\n- Validate input to exclude unsupported special characters and ensure compliance with Salesforce naming rules.\n- Enforce uniqueness constraints within the relevant context to prevent conflicts and confusion.\n- Consider the impact of renaming documents on existing references, hyperlinks, integrations, and audit trails.\n- Account for potential length restrictions and implement truncation or user prompts as necessary.\n- Support internationalization by allowing UTF-8 encoded characters where appropriate.\n- Implement validation to prevent use of reserved words or prohibited characters.\n\n**Examples**\n- \"Quarterly Sales Report Q1 2024\"\n- \"Employee Handbook\"\n- \"Project Plan - Alpha Release\"\n- \"Marketing Strategy Overview\"\n- \"Client Contract - Acme Corp\"\n\n**Important notes**\n- The maximum length of the name may be limited (commonly up to 255 characters), depending on Salesforce configuration.\n- Renaming a document can affect external references, hyperlinks, or integrations relying on the original name.\n- Case sensitivity in name comparisons may vary based on Salesforce settings and should be considered when implementing logic.\n- Special characters and whitespace handling should align with Salesforce’s accepted standards to avoid errors.\n- Avoid using reserved words or prohibited characters to ensure compatibility and"},"folderId":{"type":"string","description":"The unique identifier of the folder within Salesforce where the document is stored. This ID is essential for organizing, categorizing, and managing documents efficiently within the Salesforce environment. It enables precise association of documents to specific folders, facilitating streamlined retrieval, access control, and hierarchical organization of document assets. By linking a document to a folder, it inherits the folder’s permissions and visibility settings, ensuring consistent access management and supporting structured document workflows. 
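For example, a minimal, illustrative sketch of assigning (or moving) a document by setting folderId on the document object; the IDs are placeholder values reused from the examples in this schema, not real records:\n```json\n{\n  \"id\": \"0151t00000ABCDE\",\n  \"folderId\": \"00l5g000004AbcD\"\n}\n```\n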
Proper use of this identifier allows for effective filtering, querying, and management of documents based on their folder location, which is critical for maintaining an organized and secure document repository.\n\n**Field behavior**\n- Identifies the exact folder containing the document.\n- Associates the document with a designated folder for structured organization.\n- Must reference a valid, existing folder ID within Salesforce.\n- Typically represented as a string conforming to Salesforce ID formats (15- or 18-character).\n- Influences document visibility and access based on folder permissions.\n- Changing the folderId moves the document to a different folder, affecting its context and access.\n- Inherits folder-level sharing and security settings automatically.\n\n**Implementation guidance**\n- Verify that the folderId exists and is accessible in the Salesforce instance before assignment.\n- Validate the folderId format to ensure it matches Salesforce’s 15- or 18-character ID standards.\n- Use this field to filter, query, or categorize documents by their folder location in API operations.\n- When creating or updating documents, setting this field assigns or moves the document to the specified folder.\n- Handle permission checks to ensure the user or integration has rights to assign or access the folder.\n- Consider the impact on sharing rules, workflows, and automation triggered by folder changes.\n- Ensure case sensitivity is preserved when handling folder IDs.\n\n**Examples**\n- \"00l1t000003XyzA\" (example of a Salesforce folder ID)\n- \"00l5g000004AbcD\"\n- \"00l9m00000FghIj\"\n\n**Important notes**\n- The folderId must be valid and the folder must be accessible by the user or integration performing the operation.\n- Using an incorrect or non-existent folderId will cause errors or result in improper document categorization.\n- Folder-level permissions in Salesforce can restrict the ability to assign documents or access folder contents.\n- Changes to folderId may affect document sharing, visibility settings, and trigger related workflows.\n- Folder IDs are case-sensitive and must be handled accordingly.\n- Moving documents between folders"},"contentType":{"type":"string","description":"The MIME type of the document, specifying the exact format of the file content stored within the Salesforce document. This field enables systems and applications to accurately identify, process, render, or handle the document by indicating its media type, such as PDF, image, audio, or text formats. It adheres to standard MIME type conventions (type/subtype) and may include additional parameters like character encoding when necessary. Properly setting this field ensures seamless integration, correct display, appropriate application usage, and reliable content negotiation across diverse platforms and services. 
It plays a critical role in workflows, security scanning, compliance validation, and API interactions by guiding how the document is stored, transmitted, and interpreted.\n\n**Field behavior**\n- Defines the media type of the document content to guide processing, rendering, and handling.\n- Identifies the specific file format (e.g., PDF, JPEG, DOCX) for compatibility and validation purposes.\n- Facilitates content negotiation and appropriate response handling between systems and applications.\n- Typically formatted as a standard MIME type string (type/subtype), optionally including parameters such as charset.\n- Influences document handling in workflows, security scans, user interfaces, and API interactions.\n- Serves as a key attribute for determining how the document is stored, transmitted, and displayed.\n\n**Implementation guidance**\n- Always use valid MIME types conforming to official standards (e.g., \"application/pdf\", \"image/png\") to ensure broad compatibility.\n- Verify that the contentType accurately reflects the actual file content to prevent processing errors or misinterpretation.\n- Update the contentType promptly if the document format changes to maintain consistency and reliability.\n- Utilize this field to enable correct content handling in APIs, integrations, client applications, and automated workflows.\n- Include relevant parameters (such as charset for text-based documents) when applicable to provide additional context.\n- Consider validating the contentType against the file extension and content during upload or processing to enhance data integrity.\n\n**Examples**\n- \"application/pdf\" for PDF documents.\n- \"image/jpeg\" for JPEG image files.\n- \"application/vnd.openxmlformats-officedocument.wordprocessingml.document\" for Microsoft Word DOCX files.\n- \"text/plain; charset=utf-8\" for plain text files with UTF-8 encoding.\n- \"application/json\" for JSON formatted documents.\n- \"audio/mpeg\" for MP3 audio files.\n\n**Important notes**\n- Incorrect or mismatched contentType values can cause improper document rendering, processing failures,"},"developerName":{"type":"string","description":"developerName is a unique, developer-friendly identifier assigned to the document within the Salesforce environment. It acts as a stable and consistent reference used primarily in programmatic contexts such as Apex code, API calls, metadata configurations, deployment scripts, and integration workflows. This identifier ensures reliable access, manipulation, and management of the document across various Salesforce components and external systems. The value should be concise, descriptive, and strictly adhere to Salesforce naming conventions to prevent conflicts and ensure maintainability. 
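As a hedged illustration of how developerName relates to the user-facing name, both values below are reused from the examples in this schema and are purely illustrative:\n```json\n{\n  \"name\": \"Client Contract - Acme Corp\",\n  \"developerName\": \"CustomerAgreementDoc\"\n}\n```\n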
Typically, it employs camelCase or underscores, excludes spaces and special characters, and is designed to remain unchanged over time to avoid breaking dependencies or integrations.\n\n**Field behavior**\n- Serves as a unique and immutable identifier for the document within the Salesforce org.\n- Widely used in code, APIs, metadata, deployment processes, and integrations to reference the document reliably.\n- Should remain stable and not be altered frequently to maintain integration and system stability.\n- Enforces naming conventions that disallow spaces and special characters, favoring camelCase or underscores.\n- Case-insensitive in many contexts but consistent casing is recommended for clarity and maintainability.\n- Acts as a key reference point in metadata relationships and dependency mappings.\n\n**Implementation guidance**\n- Ensure the developerName is unique within the Salesforce environment to prevent naming collisions.\n- Follow Salesforce naming rules: start with a letter, use only alphanumeric characters and underscores.\n- Avoid reserved keywords, spaces, special characters, and punctuation marks.\n- Keep the identifier concise yet descriptive enough to clearly convey the document’s purpose or function.\n- Validate the developerName against Salesforce’s character set, length restrictions, and naming policies before deployment.\n- Plan for the developerName to remain stable post-deployment to avoid breaking references and integrations.\n- Use consistent casing conventions (camelCase or underscores) to improve readability and reduce errors.\n- Incorporate meaningful terms that reflect the document’s role or content for easier identification.\n\n**Examples**\n- \"InvoiceTemplate\"\n- \"CustomerAgreementDoc\"\n- \"SalesReport2024\"\n- \"ProductCatalog_v2\"\n- \"EmployeeOnboardingGuide\"\n\n**Important notes**\n- Changing the developerName after deployment can disrupt integrations, metadata references, and dependent components.\n- It is distinct from the document’s display name, which can include spaces and be more user-friendly.\n- While Salesforce treats developerName as case-insensitive in many scenarios, maintaining consistent casing improves readability and maintenance.\n- Commonly referenced in metadata APIs, deployment tools, automation scripts, and"},"isInternalUseOnly":{"type":"boolean","description":"isInternalUseOnly indicates whether the document is intended exclusively for internal use within the organization and must not be shared with external parties. This flag is critical for protecting sensitive, proprietary, or confidential information by restricting access strictly to authorized internal users. It serves as a definitive marker within the document’s metadata to enforce internal visibility policies, prevent accidental or unauthorized external distribution, and support compliance with organizational security standards. 
Proper handling of this flag ensures that internal communications, strategic plans, financial data, and other sensitive materials remain protected from exposure outside the company.\n\n**Field behavior**\n- When set to true, the document is strictly restricted to internal users and must not be accessible or shared with any external entities.\n- When false or omitted, the document may be accessible to external users, subject to other access control mechanisms.\n- Commonly used to flag documents containing sensitive business strategies, internal communications, proprietary data, or confidential policies.\n- Influences user interface elements and API responses to hide, restrict, or clearly label document visibility accordingly.\n- Changes to this flag can trigger notifications, workflow actions, or audit events to ensure proper handling and compliance.\n- May affect document indexing, search results, and export capabilities to prevent unauthorized dissemination.\n\n**Implementation guidance**\n- Always verify this flag before granting document access in both user interfaces and backend APIs to ensure compliance with internal use policies.\n- Use in conjunction with role-based access controls, group memberships, document classification labels, and data sensitivity tags to enforce comprehensive security.\n- Carefully manage updates to this flag to prevent inadvertent exposure of confidential information; consider implementing approval workflows and multi-level reviews for changes.\n- Implement audit logging for all changes to this flag to support compliance, traceability, and forensic investigations.\n- Ensure all integrated systems, third-party applications, and data export mechanisms respect this flag to maintain consistent access restrictions.\n- Provide clear UI indicators, warnings, or access restrictions to users when accessing documents marked as internal use only.\n- Regularly review and validate the accuracy of this flag to maintain up-to-date security postures.\n\n**Examples**\n- true: A confidential internal report on upcoming product launches accessible only to employees.\n- false: A publicly available user manual or product datasheet intended for customers and partners.\n- true: Internal HR policies and procedures documents restricted to company staff.\n- false: Press releases or marketing materials distributed externally.\n- true: Financial forecasts and budget plans shared exclusively within the finance department.\n- true: Internal audit findings and compliance reports"},"isPublic":{"type":"boolean","description":"Indicates whether the document is publicly accessible to all users or restricted exclusively to authorized users within the system. This property governs the document's visibility and access permissions, determining if authentication is required to view its content. When set to true, the document becomes openly available without any access controls, allowing anyone—including unauthenticated users—to access it freely. Conversely, setting this flag to false enforces security measures that limit access based on user roles, permissions, and organizational sharing rules, ensuring that only authorized personnel can view the document. 
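A hedged sketch combining the two visibility flags defined on this document object (the name value is illustrative):\n```json\n{\n  \"name\": \"Marketing Strategy Overview\",\n  \"isInternalUseOnly\": true,\n  \"isPublic\": false\n}\n```\n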
This setting directly impacts how the document appears in search results, sharing interfaces, and external indexing services, making it a critical factor in managing data privacy, compliance, and information governance.\n\n**Field behavior**\n- When true, the document is accessible by anyone, including unauthenticated users.\n- When false, access is restricted to users with explicit permissions or roles.\n- Directly influences the document’s visibility in search results, sharing interfaces, and external indexing.\n- Changes to this property take immediate effect, altering who can view the document.\n- Modifications may trigger updates in access control enforcement mechanisms.\n\n**Implementation guidance**\n- Use this property to clearly define and enforce document sharing and visibility policies.\n- Always set to false for sensitive, confidential, or proprietary documents to prevent unauthorized access.\n- Implement strict validation to ensure only users with appropriate administrative rights can modify this property.\n- Consider organizational compliance requirements, privacy regulations, and data security standards before making documents public.\n- Regularly monitor and audit changes to this property to maintain security oversight and compliance.\n- Plan changes carefully, as toggling this setting can have immediate and broad impact on document accessibility.\n\n**Examples**\n- true: A publicly available product catalog, marketing brochure, or press release intended for external audiences.\n- false: Internal project plans, financial reports, HR documents, or any content restricted to company personnel.\n\n**Important notes**\n- Making a document public may expose it to indexing by search engines if accessible via public URLs.\n- Changes to this property have immediate effect on document accessibility; ensure appropriate change management.\n- Ensure alignment with corporate governance, legal requirements, and data protection policies when toggling public access.\n- Public documents should be reviewed periodically to confirm that their visibility remains appropriate.\n- Unauthorized changes to this property can lead to data leaks or compliance violations.\n\n**Dependency chain**\n- Access control depends on user roles, profiles, and permission sets configured within the system.\n- Interacts"}},"description":"The 'document' property represents a digital file or record associated with a Salesforce entity, encompassing a wide range of file types such as attachments, reports, images, contracts, spreadsheets, and other files stored within the Salesforce platform. It acts as a comprehensive container that includes both the file's binary content—typically encoded in base64—and its associated metadata, such as file name, description, MIME type, and tags. This property enables seamless integration, retrieval, upload, update, and management of files within Salesforce-related workflows and processes. By linking relevant documents directly to Salesforce records, it enhances data organization, accessibility, collaboration, and compliance across various business functions. 
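Pulling together the properties defined in this schema, a hedged, non-authoritative sketch of a document object might look like the following (all values are illustrative and borrowed from the per-field examples; the binary file content itself is not shown here):\n```json\n{\n  \"name\": \"Quarterly Sales Report Q1 2024\",\n  \"developerName\": \"SalesReport2024\",\n  \"folderId\": \"00l1t000003XyzA\",\n  \"contentType\": \"application/pdf\",\n  \"isInternalUseOnly\": true,\n  \"isPublic\": false\n}\n```\n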
Additionally, it supports versioning, audit trails, and can trigger automation processes like workflows, validation rules, and triggers, ensuring robust document lifecycle management within the Salesforce ecosystem.\n\n**Field behavior**\n- Contains both the binary content (usually base64-encoded) and descriptive metadata of a file linked to a Salesforce record.\n- Supports a diverse array of file types including PDFs, images, reports, contracts, spreadsheets, and other common document formats.\n- Facilitates operations such as uploading new files, retrieving existing files, updating metadata, deleting documents, and managing versions within Salesforce.\n- May be required or optional depending on the specific API operation, object context, or organizational business rules.\n- Changes to the document content or metadata can trigger Salesforce workflows, triggers, validation rules, or automation processes.\n- Maintains versioning and audit trails when integrated with Salesforce Content or Files features.\n- Respects Salesforce security and sharing settings to control access and protect sensitive information.\n\n**Implementation guidance**\n- Encode the document content in base64 format when transmitting via API to ensure data integrity and compatibility.\n- Validate file size and type against Salesforce platform limits and organizational policies before upload to prevent errors.\n- Provide comprehensive metadata including file name, description, content type (MIME type), file extension, and relevant tags or categories to facilitate search, classification, and governance.\n- Manage user permissions and access controls in accordance with Salesforce security and sharing settings to safeguard sensitive data.\n- Use appropriate Salesforce API endpoints (e.g., /sobjects/Document, Attachment, ContentVersion) and HTTP methods aligned with the document type and intended operation.\n- Implement robust error handling to manage responses related to file size limits, unsupported formats, permission denials, or network issues.\n- Leverage Salesforce Content or Files features for enhanced document management capabilities such as version control, sharing,"},"attachment":{"type":"object","properties":{"id":{"type":"string","description":"Unique identifier for the attachment within the Salesforce system, serving as the immutable primary key for the attachment object. This ID is essential for accurately referencing, retrieving, updating, or deleting the specific attachment record through API calls. It must conform to Salesforce's standardized ID format, which can be either a 15-character case-sensitive or an 18-character case-insensitive alphanumeric string, with the 18-character version preferred for integrations due to its case insensitivity and reduced risk of errors. 
While the ID itself does not contain metadata about the attachment's content, it is crucial for linking the attachment to related Salesforce records and ensuring precise operations on the attachment resource across the platform.\n\n**Field behavior**\n- Acts as the unique and immutable primary key for the attachment record.\n- Required for all API operations involving the attachment, including retrieval, updates, and deletions.\n- Ensures consistent and precise identification of the attachment in all Salesforce API interactions.\n- Remains constant and unchangeable throughout the lifecycle of the attachment once created.\n\n**Implementation guidance**\n- Must strictly adhere to Salesforce ID formats: either 15-character case-sensitive or 18-character case-insensitive alphanumeric strings.\n- Should be dynamically obtained from Salesforce API responses when creating or querying attachments to ensure accuracy.\n- Avoid hardcoding or manually entering IDs to prevent data integrity issues and API errors.\n- Validate the ID format programmatically before use to ensure compatibility with Salesforce API requirements.\n- Prefer using the 18-character ID in integration scenarios to minimize case sensitivity issues.\n\n**Examples**\n- \"00P1t00000XyzAbCDE\"\n- \"00P1t00000XyzAbCDEAAA\"\n- \"00P3t00000LmNOpQR1\"\n\n**Important notes**\n- The 15-character ID is case-sensitive, whereas the 18-character ID is case-insensitive and recommended for integration scenarios.\n- Using an incorrect, malformed, or outdated ID will result in API errors or failed operations.\n- The ID does not convey any descriptive information about the attachment’s content or metadata.\n- Always retrieve and use the ID directly from Salesforce API responses to ensure validity and prevent errors.\n- The ID is unique within the Salesforce org and cannot be reused or duplicated.\n\n**Dependency chain**\n- Generated automatically by Salesforce upon creation of the attachment record.\n- Utilized by other API calls and properties that require precise attachment identification.\n- Linked to parent or related Salesforce objects, integrating the attachment"},"name":{"type":"string","description":"The name property represents the filename of the attachment within Salesforce and serves as the primary identifier for the attachment. It is crucial for enabling easy recognition, retrieval, and management of attachments both in the user interface and through API operations. This property typically includes the file extension, which specifies the file type and ensures proper handling, display, and processing of the file across different systems and platforms. The filename should be meaningful, descriptive, and concise to provide clarity and ease of use, especially when managing multiple attachments linked to a single parent record. 
Adhering to proper naming conventions helps maintain organization, improves searchability, and prevents conflicts or ambiguities that could arise from duplicate or unclear filenames.\n\n**Field behavior**\n- Stores the filename of the attachment as a string value.\n- Acts as the key identifier for the attachment within the Salesforce environment.\n- Displayed prominently in the Salesforce UI and API responses when viewing or managing attachments.\n- Should be unique within the context of the parent record to avoid confusion.\n- Includes file extensions (e.g., .pdf, .jpg, .docx) to denote the file type and facilitate correct processing.\n- Case sensitivity may apply depending on the context, affecting retrieval and display.\n- Used in URL generation and API endpoints, impacting accessibility and referencing.\n\n**Implementation guidance**\n- Validate filenames to exclude prohibited characters (such as \\ / : * ? \" < > |) and other special characters that could cause errors or security vulnerabilities.\n- Ensure the filename length does not exceed Salesforce’s maximum limit, typically 255 characters.\n- Always include the appropriate and accurate file extension to enable correct file type recognition.\n- Use clear, descriptive, and consistent naming conventions that reflect the content or purpose of the attachment.\n- Perform validation checks before upload to prevent failures or inconsistencies.\n- Avoid renaming attachments post-upload to maintain integrity of references and links.\n- Consider localization and encoding standards to support international characters where applicable.\n\n**Examples**\n- \"contract_agreement.pdf\"\n- \"profile_picture.jpg\"\n- \"sales_report_q1_2024.xlsx\"\n- \"meeting_notes.txt\"\n- \"project_plan_v2.docx\"\n\n**Important notes**\n- Filenames are case-sensitive in certain contexts, which can impact retrieval and display.\n- Avoid special characters that may interfere with URLs, file systems, or API processing.\n- Renaming an attachment after upload can disrupt existing references or links to the file.\n- Consistency between the filename and the actual content enhances user"},"parentId":{"type":"string","description":"The unique identifier of the parent Salesforce record to which the attachment is linked. This field establishes a direct and essential relationship between the attachment and its parent object, such as an Account, Contact, Opportunity, or any other Salesforce record type that supports attachments. It ensures that attachments are correctly associated within the Salesforce data model, enabling accurate retrieval, management, and organization of attachments in relation to their parent records. By linking attachments to their parent records, this field facilitates data integrity, efficient querying, and seamless navigation between related records. It is a mandatory field when creating or updating attachments and must reference a valid, existing Salesforce record ID. 
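For instance, a minimal hedged sketch linking an attachment to an Opportunity; the filename and parent ID are reused from the examples in this schema and are not real records:\n```json\n{\n  \"name\": \"sales_report_q1_2024.xlsx\",\n  \"parentId\": \"0061a000005DEF789\"\n}\n```\n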
Proper use of this field guarantees referential integrity, enforces access controls based on parent record permissions, and impacts the visibility and lifecycle of the attachment within the Salesforce environment.\n\n**Field behavior**\n- Represents the Salesforce record ID that the attachment is associated with.\n- Mandatory when creating or updating an attachment to specify its parent record.\n- Must correspond to an existing Salesforce record ID of a valid object type that supports attachments.\n- Enables querying, filtering, and reporting of attachments based on their parent record.\n- Enforces referential integrity between attachments and parent records.\n- Changes to the parent record, including deletion or modification, can affect the visibility, accessibility, and lifecycle of the attachment.\n- Access to the attachment is governed by the permissions and sharing settings of the parent record.\n\n**Implementation guidance**\n- Ensure the parentId is a valid 15- or 18-character Salesforce record ID, with preference for 18-character IDs to avoid case sensitivity issues.\n- Verify the existence and active status of the parent record before assigning the parentId.\n- Confirm that the parent object type supports attachments to prevent errors during attachment operations.\n- Handle user permissions and sharing settings to ensure appropriate access rights to the parent record and its attachments.\n- Use Salesforce APIs or metadata services to validate the parentId format, object type, and existence.\n- Implement robust error handling for invalid, non-existent, or unauthorized parentId values during attachment creation or updates.\n- Consider the impact of parent record lifecycle events (such as deletion or archiving) on associated attachments.\n\n**Examples**\n- \"0011a000003XYZ123\" (Account record ID)\n- \"0031a000004ABC456\" (Contact record ID)\n- \"0061a000005DEF789\" (Opportunity record ID)\n\n**Important notes**\n- The parentId is critical for maintaining data integrity and ensuring"},"contentType":{"type":"string","description":"The MIME type of the attachment content, specifying the exact format and nature of the file (e.g., \"image/png\", \"application/pdf\", \"text/plain\"). This information is critical for correctly interpreting, processing, and displaying the attachment across different systems and platforms. It enables clients and servers to determine how to handle the file, whether to render it inline, prompt for download, or apply specific processing rules. Accurate contentType values facilitate validation, security scanning, and compatibility checks, ensuring that the attachment is managed appropriately throughout its lifecycle. 
Properly defined contentType values improve interoperability, user experience, and system security by enabling precise content handling and reducing the risk of errors or vulnerabilities.\n\n**Field behavior**\n- Defines the media type of the attachment to guide processing, rendering, and handling decisions.\n- Used by clients, servers, and intermediaries to determine appropriate actions such as inline display, download prompts, or specialized processing.\n- Supports validation of file format during upload, storage, and download operations to ensure data integrity.\n- Influences security measures including filtering, scanning for malware, and enforcing content policies.\n- Affects user interface elements like preview generation, icon selection, and content categorization.\n- May be used in logging, auditing, and analytics to track content types handled by the system.\n\n**Implementation guidance**\n- Must strictly adhere to the standard MIME type format as defined by IANA (e.g., \"type/subtype\").\n- Should accurately reflect the actual content format to prevent misinterpretation or processing errors.\n- Use generic types such as \"application/octet-stream\" only when the specific type cannot be determined.\n- Include optional parameters (e.g., charset, boundary) when relevant to fully describe the content.\n- Ensure the contentType is set or updated whenever the attachment content changes to maintain consistency.\n- Validate the contentType against the file extension and actual content where feasible to enhance reliability.\n- Consider normalizing the contentType to lowercase to maintain consistency across systems.\n- Avoid relying solely on contentType for security decisions; combine with other metadata and content inspection.\n\n**Examples**\n- \"image/jpeg\"\n- \"application/pdf\"\n- \"text/plain; charset=utf-8\"\n- \"application/vnd.ms-excel\"\n- \"audio/mpeg\"\n- \"video/mp4\"\n- \"application/json\"\n- \"multipart/form-data; boundary=something\"\n- \"application/zip\"\n\n**Important notes**\n- An incorrect, missing, or misleading contentType can lead to improper"},"isPrivate":{"type":"boolean","description":"Indicates whether the attachment is designated as private, restricting its visibility exclusively to users who have been explicitly authorized to access it. This setting is critical for maintaining confidentiality and ensuring that sensitive or proprietary information contained within attachments is accessible only to appropriate personnel. When marked as private, the attachment’s visibility is constrained based on the user’s permissions and the sharing settings of the parent record, thereby reinforcing data security within the Salesforce environment. 
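As a hedged illustration (the filename, parent ID, and MIME type are reused from the examples in this schema):\n```json\n{\n  \"name\": \"contract_agreement.pdf\",\n  \"parentId\": \"0011a000003XYZ123\",\n  \"contentType\": \"application/pdf\",\n  \"isPrivate\": true\n}\n```\n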
This field directly controls the scope of access to the attachment, dynamically influencing who can view or interact with it, and must be managed carefully to align with organizational privacy policies and compliance requirements.\n\n**Field behavior**\n- When set to true, the attachment is accessible only to users with explicit authorization, effectively hiding it from unauthorized users.\n- When set to false or omitted, the attachment inherits the visibility of its parent record and is accessible to all users who have access to that record.\n- Directly affects the attachment’s visibility scope and access control within Salesforce.\n- Changes to this field immediately impact user access permissions for the attachment.\n- Does not override broader organizational sharing settings but further restricts access.\n- The privacy setting applies in conjunction with existing record-level and object-level security controls.\n\n**Implementation guidance**\n- Use this field to enforce strict data privacy and protect sensitive attachments in compliance with organizational policies and regulatory requirements.\n- Ensure that user roles, profiles, and sharing rules are properly configured to respect this privacy setting.\n- Validate that the value is a boolean (true or false) to prevent data inconsistencies.\n- Evaluate the implications on existing sharing rules and record-level security before modifying this field.\n- Consider implementing audit logging to track changes to this privacy setting for security and compliance purposes.\n- Test changes in a non-production environment to understand the impact on user access before deploying to production.\n- Communicate changes to relevant stakeholders to avoid confusion or unintended access issues.\n\n**Examples**\n- `true` — The attachment is private and visible only to users explicitly authorized to access it.\n- `false` — The attachment is public within the context of the parent record and visible to all users who have access to that record.\n\n**Important notes**\n- Setting this field to true does not grant access to users who lack permission to view the parent record; it only further restricts access.\n- Modifying this setting can significantly affect user access and should be managed carefully to avoid unintended data exposure or access denial.\n- Some Salesforce editions or configurations may have limitations or specific behaviors regarding attachment privacy"},"description":{"type":"string","description":"A textual description providing additional context, detailed information, and clarifying the content, purpose, or relevance of the attachment within Salesforce. This field enhances user understanding and facilitates easier identification, organization, and management of attachments by serving as descriptive metadata that complements the attachment’s core data without containing the attachment content itself. It supports plain text with special characters, punctuation, and full Unicode, enabling expressive, internationalized, and accessible descriptions. 
Typically displayed in user interfaces alongside attachment details, this description also plays a crucial role in search, filtering, sorting, and reporting operations, improving overall data discoverability and usability.\n\n**Field behavior**\n- Optional and editable field used to describe the attachment’s nature or relevance.\n- Supports plain text input including special characters, punctuation, and Unicode.\n- Displayed prominently in user interfaces where attachment details appear.\n- Utilized in search, filtering, sorting, and reporting functionalities to enhance data retrieval.\n- Does not contain or affect the actual attachment content or file storage.\n\n**Implementation guidance**\n- Encourage concise yet informative descriptions to aid quick identification.\n- Validate and sanitize input to prevent injection attacks and ensure data integrity.\n- Support full Unicode to accommodate international characters and symbols.\n- Enforce a reasonable maximum length (commonly 255 characters) to maintain performance and UI consistency.\n- Avoid inclusion of sensitive or confidential information unless appropriate security controls are applied.\n- Implement trimming or sanitization routines to maintain clean and consistent data.\n\n**Examples**\n- \"Quarterly financial report for Q1 2024.\"\n- \"Signed contract agreement with client XYZ.\"\n- \"Product brochure for the new model launch.\"\n- \"Meeting notes and action items from April 15, 2024.\"\n- \"Technical specification document for project Alpha.\"\n\n**Important notes**\n- This field contains descriptive metadata only, not the attachment content.\n- Sensitive information should be excluded or protected according to security policies.\n- Changes to this field do not impact the attachment file or its storage.\n- Consistent and accurate descriptions improve attachment management and user experience.\n- Plays a key role in enhancing searchability and filtering within Salesforce.\n\n**Dependency chain**\n- Directly associated with the attachment entity in Salesforce.\n- Often used in conjunction with other metadata fields such as attachment name, type, owner, size, and creation date.\n- Supports comprehensive context when combined with related records and metadata.\n\n**Technical details**\n- Data type: string.\n- Maximum length: typically 255 characters, configurable"}},"description":"The attachment property represents a file or document linked to a Salesforce record, encompassing both the binary content of the file and its associated metadata. It serves as a versatile container for a wide variety of file types—including documents, images, PDFs, spreadsheets, and emails—allowing users to enrich Salesforce records such as emails, cases, opportunities, or any standard or custom objects with supplementary information or evidence. This property supports comprehensive lifecycle management, enabling operations such as uploading new files, retrieving existing attachments, updating file details, and deleting attachments as needed. By maintaining a direct association with the parent Salesforce record through identifiers like ParentId, it ensures accurate linkage and seamless integration within the Salesforce ecosystem. 
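Combining the metadata properties defined in this schema, a hedged, non-authoritative sketch of an attachment object might look like the following (values are illustrative and reused from the per-field examples; the base64 file body is omitted):\n```json\n{\n  \"name\": \"meeting_notes.txt\",\n  \"parentId\": \"0031a000004ABC456\",\n  \"contentType\": \"text/plain; charset=utf-8\",\n  \"isPrivate\": false,\n  \"description\": \"Meeting notes and action items from April 15, 2024.\"\n}\n```\n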
The attachment property facilitates enhanced collaboration, documentation, and data completeness by providing a centralized and manageable way to handle files related to Salesforce data.\n\n**Field behavior**\n- Stores the binary content or a reference to a file linked to a specific Salesforce record.\n- Supports multiple file formats including but not limited to documents, images, PDFs, spreadsheets, and email files.\n- Enables attaching additional context, documentation, or evidence to Salesforce objects.\n- Allows full lifecycle management: create, read, update, and delete operations on attachments.\n- Contains metadata such as file name, MIME content type, file size, and the ID of the parent Salesforce record.\n- Reflects changes immediately in the associated Salesforce record’s file attachments.\n- Maintains a direct association with the parent record through the ParentId field to ensure accurate linkage.\n- Supports integration with Salesforce Files (ContentVersion and ContentDocument) for advanced file management features.\n\n**Implementation guidance**\n- Encode file content using Base64 encoding when transmitting via Salesforce APIs to ensure data integrity.\n- Validate file size against Salesforce limits (commonly up to 25 MB per attachment) and confirm supported file types before upload.\n- Use the ParentId field to correctly associate the attachment with the intended Salesforce record.\n- Implement robust permission checks to ensure only authorized users can upload, view, or modify attachments.\n- Consider leveraging Salesforce Files (ContentVersion and ContentDocument objects) for enhanced file management capabilities, versioning, and sharing features, especially for new implementations.\n- Handle error responses gracefully, including those related to file size limits, unsupported formats, or permission issues.\n- Ensure metadata fields such as file name and content type are accurately populated to facilitate proper handling and display.\n- When updating attachments, confirm that the new content or metadata aligns with organizational policies and compliance requirements.\n- Optimize performance"},"contentVersion":{"type":"object","properties":{"contentDocumentId":{"type":"string","description":"contentDocumentId is the unique identifier for the parent Content Document associated with a specific Content Version in Salesforce. It serves as the fundamental reference that links all versions of the same document, enabling comprehensive version control and efficient document management within the Salesforce ecosystem. This ID is immutable once the Content Version is created, ensuring a stable and consistent connection across all iterations of the document. By maintaining this linkage, Salesforce facilitates seamless navigation between individual document versions and their overarching document entity, supporting streamlined retrieval, updates, collaboration, and organizational workflows. 
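To place this identifier in context, a hedged sketch of a contentVersion object using only the fields described in this schema (the ID and title values are illustrative, reused from the examples given for those fields):\n```json\n{\n  \"contentDocumentId\": \"0691t00000XXXXXXAAA\",\n  \"title\": \"Q2 Financial Report 2024\"\n}\n```\n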
This field is automatically assigned by Salesforce during the creation of a Content Version and is critical for maintaining the integrity and traceability of document versions throughout their lifecycle.\n\n**Field behavior**\n- Represents the unique ID of the parent Content Document to which the Content Version belongs.\n- Acts as the primary link connecting multiple versions of the same document.\n- Immutable after the Content Version record is created, preserving data integrity.\n- Enables navigation from a specific Content Version to its parent Content Document.\n- Automatically assigned and managed by Salesforce during Content Version creation.\n- Essential for maintaining consistent version histories and document relationships.\n\n**Implementation guidance**\n- Use this ID to reference or retrieve the entire document across all its versions.\n- Verify that the ID corresponds to an existing Content Document before usage.\n- Utilize in SOQL queries and API calls to associate Content Versions with their parent document.\n- Avoid modifying this field directly; it is managed internally by Salesforce.\n- Employ this ID to support document lifecycle management, version tracking, and collaboration features.\n- Leverage this field to ensure accurate linkage in integrations and custom development.\n\n**Examples**\n- \"0691t00000XXXXXXAAA\"\n- \"0692a00000YYYYYYBBB\"\n- \"0693b00000ZZZZZZCCC\"\n\n**Important notes**\n- Distinct from the Content Version ID, which identifies a specific version rather than the entire document.\n- Mandatory for any valid Content Version record to ensure proper document association.\n- Cannot be null or empty for Content Version records.\n- Critical for preserving document integrity, version tracking, and enabling collaborative document management.\n- Changes to this ID are not permitted post-creation to avoid breaking version linkages.\n- Plays a key role in Salesforce’s document sharing, access control, and audit capabilities.\n\n**Dependency chain**\n- Requires the existence of a valid Content Document record in Salesforce.\n- Directly related to Content Version records representing different iterations of the same document.\n- Integral"},"title":{"type":"string","description":"The title of the content version serves as the primary name or label assigned to this specific iteration of content within Salesforce. It acts as a key identifier that enables users to quickly recognize, differentiate, and manage various versions of content. The title typically summarizes the subject matter, purpose, or significant details about the content, making it easier to locate and understand at a glance. It should be concise yet sufficiently descriptive to effectively convey the essence and context of the content version. 
Well-crafted titles improve navigation, searchability, and version tracking within content management workflows, enhancing overall user experience and operational efficiency.\n\n**Field behavior**\n- Functions as the main identifier or label for the content version in user interfaces, listings, and reports.\n- Enables users to distinguish between different versions of the same content efficiently.\n- Reflects the content’s subject, purpose, key attributes, or version status.\n- Should be concise but informative to facilitate quick recognition and comprehension.\n- Commonly displayed in search results, filters, version histories, and content libraries.\n- Supports sorting and filtering operations to streamline content management.\n\n**Implementation guidance**\n- Use clear, descriptive titles that accurately represent the content version’s focus or changes.\n- Avoid special characters or formatting that could cause display, parsing, or processing issues.\n- Keep titles within a reasonable length (typically up to 255 characters) to ensure readability across various UI components and devices.\n- Update the title appropriately when creating new versions to reflect significant updates or revisions.\n- Consider including version numbers, dates, or status indicators to enhance clarity and version tracking.\n- Follow organizational naming conventions and standards to maintain consistency.\n\n**Examples**\n- \"Q2 Financial Report 2024\"\n- \"Product Launch Presentation v3\"\n- \"Employee Handbook - Updated March 2024\"\n- \"Marketing Strategy Overview\"\n- \"Website Redesign Proposal - Final Draft\"\n\n**Important notes**\n- Titles do not need to be globally unique but should be meaningful and contextually relevant.\n- Changing the title does not modify the underlying content data or its integrity.\n- Titles are frequently used in search, filtering, sorting, and reporting operations within Salesforce.\n- Consistent naming conventions and formatting improve content management efficiency and user experience.\n- Consider organizational standards or guidelines when defining title formats.\n- Titles should avoid ambiguity to prevent confusion between similar content versions.\n\n**Dependency chain**\n- Part of the ContentVersion object within Salesforce’s content management framework.\n- Often associated with other metadata fields such as description"},"pathOnClient":{"type":"string","description":"The pathOnClient property specifies the original file path of the content as it exists on the client machine before being uploaded to Salesforce. This path serves as a precise reference to the source location of the file on the user's local system, facilitating identification, auditing, and tracking of the file's origin. It captures either the full absolute path or a relative path depending on the client environment and upload context, accurately reflecting the exact location from which the file was sourced. 
This information is primarily used for informational, diagnostic, and user interface purposes within Salesforce and does not influence how the file is stored, accessed, or secured in the system.\n\n**Field behavior**\n- Represents the full absolute or relative file path on the client device where the file was stored prior to upload.\n- Automatically populated during file upload processes when the client environment provides this information.\n- Used mainly for reference, auditing, and user interface display to provide context about the file’s origin.\n- May be empty, null, or omitted if the file was not uploaded from a client device or if the path information is unavailable or restricted.\n- Does not influence file storage, retrieval, or access permissions within Salesforce.\n- Retains the original formatting and syntax as provided by the client system at upload time.\n\n**Implementation guidance**\n- Capture the file path exactly as provided by the client system at the time of upload, preserving the original format and case sensitivity.\n- Handle differences in file path syntax across operating systems (e.g., backslashes for Windows, forward slashes for Unix/Linux/macOS).\n- Treat this property strictly as metadata for informational purposes; avoid using it for security, access control, or file retrieval logic.\n- Sanitize or mask sensitive directory or user information when displaying or logging this path to protect user privacy and comply with data protection regulations.\n- Ensure compliance with relevant privacy laws and organizational policies when storing or exposing client file paths.\n- Consider truncating, normalizing, or encoding paths if necessary to conform to platform length limits, display constraints, or to prevent injection vulnerabilities.\n- Provide clear documentation to users and administrators about the non-security nature of this property to prevent misuse.\n\n**Examples**\n- \"C:\\\\Users\\\\JohnDoe\\\\Documents\\\\Report.pdf\"\n- \"/home/janedoe/projects/presentation.pptx\"\n- \"Documents/Invoices/Invoice123.pdf\"\n- \"Desktop/Project Files/DesignMockup.png\"\n- \"../relative/path/to/file.txt\"\n\n**Important notes**\n- This property does not"},"tagCsv":{"type":"string","description":"A comma-separated string representing the tags associated with the content version in Salesforce. This field enables the assignment of multiple descriptive tags within a single string, facilitating efficient categorization, organization, and enhanced searchability of the content. Tags serve as metadata labels that help users filter, locate, and manage content versions effectively across the platform by grouping related items and supporting advanced search and filtering capabilities. 
Proper use of this field improves content discoverability, streamlines content management workflows, and supports dynamic content classification.\n\n**Field behavior**\n- Accepts multiple tags separated by commas, typically without spaces, though trimming whitespace is recommended.\n- Tags are used to categorize, organize, and improve the discoverability of content versions.\n- Tags can be added, updated, or removed by modifying the entire string value.\n- Duplicate tags within the string should be avoided to maintain clarity and effectiveness.\n- Tags are generally treated as case-insensitive for search purposes but stored exactly as entered.\n- Changes to this field immediately affect how content versions are indexed and retrieved in search operations.\n\n**Implementation guidance**\n- Ensure individual tags do not contain commas to prevent parsing errors.\n- Validate the tagCsv string for invalid characters, excessive length, and proper formatting.\n- When updating tags, replace the entire string to accurately reflect the current tag set.\n- Trim leading and trailing whitespace from each tag before processing or storage.\n- Utilize this field to enhance search filters, categorization, and content management workflows.\n- Implement application-level checks to prevent duplicate or meaningless tags.\n- Consider standardizing tag formats (e.g., lowercase) to maintain consistency across records.\n\n**Examples**\n- \"marketing,Q2,presentation\"\n- \"urgent,clientA,confidential\"\n- \"releaseNotes,v1.2,approved\"\n- \"finance,yearEnd,reviewed\"\n\n**Important notes**\n- The maximum length of the tagCsv string is subject to Salesforce field size limitations.\n- Tags should be meaningful, consistent, and standardized to maximize their utility.\n- Modifications to tags can impact content visibility and access if used in filtering or permission logic.\n- This field does not enforce uniqueness of tags; application-level logic should handle duplicate prevention.\n- Tags are stored as-is; any normalization or case conversion must be handled externally.\n- Improper formatting or invalid characters may cause errors or unexpected behavior in search and filtering.\n\n**Dependency chain**\n- Closely related to contentVersion metadata and search/filtering functionality.\n- May integrate with Salesforce"},"contentLocation":{"type":"string","description":"contentLocation specifies the precise storage location of the content file associated with the Salesforce ContentVersion record. It identifies where the actual file data resides—whether within Salesforce's internal storage infrastructure, an external content management system, or a linked external repository—and governs how the content is accessed, retrieved, and managed throughout its lifecycle. This field plays a crucial role in content delivery, access permissions, and integration workflows by clearly indicating the source location of the file data. 
Properly setting and interpreting contentLocation ensures that content is handled correctly according to its storage context, enabling seamless access when stored internally or facilitating robust integration and synchronization with third-party external systems.\n\n**Field behavior**\n- Determines the origin and storage context of the content file linked to the ContentVersion record.\n- Influences the mechanisms and protocols used for accessing, retrieving, and managing the content.\n- Typically assigned automatically by Salesforce based on the upload method or integration configuration.\n- Common values include 'S' for Salesforce internal storage, 'E' for external storage systems, and 'L' for linked external locations.\n- Often read-only to preserve data integrity and prevent unauthorized or unintended modifications.\n- Alterations to this field can impact content visibility, sharing settings, and delivery workflows.\n\n**Implementation guidance**\n- Utilize this field to programmatically distinguish between internally stored content and externally managed files.\n- When integrating with external repositories, ensure contentLocation accurately reflects the external storage to enable correct content handling and retrieval.\n- Avoid manual updates unless supported by Salesforce documentation and accompanied by a comprehensive understanding of the consequences.\n- Validate the contentLocation value prior to processing or delivering content to guarantee proper routing and access control.\n- Confirm that external storage systems are correctly configured, authenticated, and authorized to maintain uninterrupted access.\n- Monitor this field closely during content migration or integration activities to prevent data inconsistencies or access disruptions.\n\n**Examples**\n- 'S' — Content physically stored within Salesforce’s internal storage infrastructure.\n- 'E' — Content residing in an external system integrated with Salesforce, such as a third-party content repository.\n- 'L' — Content located in a linked external location, representing specialized or less common storage configurations.\n\n**Important notes**\n- Manual modifications to contentLocation can lead to broken links, access failures, or data inconsistencies.\n- Salesforce generally manages this field automatically to uphold consistency and reliability.\n- The contentLocation value directly influences content delivery methods and access control policies.\n- External content locations require proper configuration, authentication, and permissions to function correctly"}},"description":"The contentVersion property serves as the unique identifier for a specific iteration of a content item within the Salesforce platform's content management system. It enables precise tracking, retrieval, and management of individual versions or revisions of documents, files, or digital assets. Each contentVersion corresponds to a distinct, immutable state of the content at a given point in time, facilitating robust version control, content integrity, and auditability. When a content item is created or modified, Salesforce automatically generates a new contentVersion to capture those changes, allowing users and systems to access, compare, or revert to previous versions as necessary. 
This property is fundamental for workflows involving content updates, approvals, compliance tracking, and historical content analysis.\n\n**Field behavior**\n- Uniquely identifies a single, immutable version of a content item within Salesforce.\n- Differentiates between multiple revisions or updates of the same content asset.\n- Automatically created or incremented by Salesforce upon content creation or modification.\n- Used in API operations to retrieve, reference, update metadata, or manage specific content versions.\n- Remains constant once assigned; any content changes result in a new contentVersion record.\n- Supports audit trails by preserving historical versions of content.\n\n**Implementation guidance**\n- Ensure synchronization of contentVersion values with Salesforce to maintain consistency across integrated systems.\n- Utilize this property for version-specific operations such as downloading content, updating metadata, or deleting particular versions.\n- Validate contentVersion identifiers against Salesforce API standards to avoid errors or mismatches.\n- Implement comprehensive error handling for scenarios where contentVersion records are missing, deprecated, or inaccessible due to permission restrictions.\n- Consider access control differences at the version level, as visibility and permissions may vary between versions.\n- Leverage contentVersion in workflows requiring version comparison, rollback, or approval processes.\n\n**Examples**\n- \"0681t000000XyzA\" (initial version of a document)\n- \"0681t000000XyzB\" (a subsequent, updated version reflecting content changes)\n- \"0681t000000XyzC\" (a further revised version incorporating additional edits)\n\n**Important notes**\n- contentVersion is distinct from contentDocumentId; contentVersion identifies a specific version, whereas contentDocumentId refers to the overall document entity.\n- Proper versioning is critical for maintaining content integrity, supporting audit trails, and ensuring regulatory compliance.\n- Access permissions and visibility can differ between content versions, potentially affecting user access and operations.\n- The contentVersion ID typically begins with the prefix"}}},"File":{"type":"object","description":"**CRITICAL: This object is REQUIRED for all file-based import adaptor types.**\n\n**When to include this object**\n\n✅ **MUST SET** when `adaptorType` is one of:\n- `S3Import`\n- `FTPImport`\n- `AS2Import`\n\n❌ **DO NOT SET** for non-file-based imports like:\n- `SalesforceImport`\n- `NetSuiteImport`\n- `HTTPImport`\n- `MongodbImport`\n- `RDBMSImport`\n\n**Minimum required fields**\n\nFor most file imports, you need at minimum:\n- `fileName`: The output file name (supports Handlebars like `{{timestamp}}`)\n- `type`: The file format (json, csv, xml, xlsx)\n- `skipAggregation`: Usually `false` for standard imports\n\n**Example (S3 Import)**\n\n```json\n{\n  \"file\": {\n    \"fileName\": \"customers-{{timestamp}}.json\",\n    \"skipAggregation\": false,\n    \"type\": \"json\"\n  },\n  \"s3\": {\n    \"region\": \"us-east-1\",\n    \"bucket\": \"my-bucket\",\n    \"fileKey\": \"customers-{{timestamp}}.json\"\n  },\n  \"adaptorType\": \"S3Import\"\n}\n```","properties":{"fileName":{"type":"string","description":"**REQUIRED for file-based imports.**\n\nThe name of the file to be created/written. 
Supports Handlebars expressions for dynamic naming.\n\n**Default behavior**\n- When the user does not specify a file naming convention, **always default to timestamped filenames** using `{{timestamp}}` (e.g., `\"items-{{timestamp}}.csv\"`). This ensures each run produces a unique file and avoids overwriting previous exports.\n- Only use a fixed filename (without timestamp) if the user explicitly requests overwriting or a fixed name.\n\n**Common patterns**\n- `\"data-{{timestamp}}.json\"` - Timestamped JSON file (DEFAULT — use this pattern when not specified)\n- `\"export-{{date}}.csv\"` - Date-stamped CSV file\n- `\"{{recordType}}-backup.xml\"` - Dynamic record type naming\n\n**Examples**\n- `\"customers-{{timestamp}}.json\"`\n- `\"orders-export.csv\"`\n- `\"inventory-{{date}}.xlsx\"`\n\n**Important**\n- For S3 imports, this should typically match the `s3.fileKey` value\n- For FTP imports, this should typically match the `ftp.fileName` value"},"skipAggregation":{"type":"boolean","description":"Controls whether records are aggregated into a single file or processed individually.\n\n**Values**\n- `false` (DEFAULT): Records are aggregated into a single output file\n- `true`: Each record is written as a separate file\n\n**When to use**\n- **`false`**: Standard batch imports where all records go into one file\n- **`true`**: When each record needs its own file (e.g., individual documents)\n\n**Default:** `false`","default":false},"type":{"type":"string","enum":["json","csv","xml","xlsx","filedefinition"],"description":"**REQUIRED for file-based imports.**\n\nThe format of the output file.\n\n**Options**\n- `\"json\"`: JSON format (most common for API data)\n- `\"csv\"`: Comma-separated values (tabular data)\n- `\"xml\"`: XML format\n- `\"xlsx\"`: Excel spreadsheet format\n- `\"filedefinition\"`: Custom file definition format\n\n**Selection guidance**\n- Use `\"json\"` for structured/nested data, API payloads\n- Use `\"csv\"` for flat tabular data, spreadsheet-like records\n- Use `\"xml\"` for XML-based integrations, SOAP services\n- Use `\"xlsx\"` for Excel-compatible outputs"},"encoding":{"type":"string","enum":["utf8","win1252","utf-16le","gb18030","macroman","iso88591","shiftjis"],"description":"Character encoding for the output file.\n\n**Default:** `\"utf8\"`\n\n**When to change**\n- `\"win1252\"`: Legacy Windows systems\n- `\"utf-16le\"`: Unicode with BOM requirements\n- `\"gb18030\"`: Chinese character sets\n- `\"shiftjis\"`: Japanese character sets","default":"utf8"},"delete":{"type":"boolean","description":"Whether to delete the source file after successful import.\n\n**Values**\n- `true`: Delete source file after processing\n- `false`: Keep source file\n\n**Default:** `false`","default":false},"compressionFormat":{"type":"string","enum":["gzip","zip"],"description":"Compression format for the output file.\n\n**Options**\n- `\"gzip\"`: GZIP compression (.gz)\n- `\"zip\"`: ZIP compression (.zip)\n\n**When to use**\n- Large files that benefit from compression\n- When target system expects compressed files"},"backupPath":{"type":"string","description":"Path where backup copies of files should be stored.\n\n**Examples**\n- `\"backup/\"` - Relative backup folder\n- `\"/archive/2024/\"` - Absolute backup path"},"purgeInternalBackup":{"type":"boolean","description":"Whether to purge internal backup copies after successful processing.\n\n**Default:** `false`","default":false},"batchSize":{"type":"integer","description":"Number of records to include per batch/file when processing large 
datasets.\n\n**When to use**\n- Large imports that need to be split into multiple files\n- When target system has file size limitations\n\n**Note**\n- Only applicable when `skipAggregation` is `true`"},"encrypt":{"type":"boolean","description":"Whether to encrypt the output file.\n\n**Values**\n- `true`: Encrypt the file (requires PGP configuration)\n- `false`: No encryption\n\n**Default:** `false`","default":false},"csv":{"type":"object","description":"CSV-specific configuration. Only used when `type` is `\"csv\"`.","properties":{"rowDelimiter":{"type":"string","description":"Character(s) used to separate rows. Default is newline.","default":"\n"},"columnDelimiter":{"type":"string","description":"Character(s) used to separate columns. Default is comma.","default":","},"includeHeader":{"type":"boolean","description":"Whether to include header row with column names.","default":true},"wrapWithQuotes":{"type":"boolean","description":"Whether to wrap field values in quotes.","default":false},"replaceTabWithSpace":{"type":"boolean","description":"Replace tab characters with spaces.","default":false},"replaceNewlineWithSpace":{"type":"boolean","description":"Replace newline characters with spaces within fields.","default":false},"truncateLastRowDelimiter":{"type":"boolean","description":"Remove trailing row delimiter from the file.","default":false}}},"json":{"type":"object","description":"JSON-specific configuration. Only used when `type` is `\"json\"`.","properties":{"resourcePath":{"type":"string","description":"JSONPath expression to locate records within the JSON structure."}}},"xml":{"type":"object","description":"XML-specific configuration. Only used when `type` is `\"xml\"`.","properties":{"resourcePath":{"type":"string","description":"XPath expression to locate records within the XML structure."}}},"fileDefinition":{"type":"object","description":"Configuration settings for parsing files using a predefined file definition. This object enables processing of complex, non-standard, or proprietary file formats that require specialized parsing logic beyond what the standard parsers (CSV, JSON, XML, etc.) can handle.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"filedefinition\". This approach is required for properly handling:\n- Legacy or proprietary file formats with complex structures\n- Fixed-width text files where field positions are defined by character positions\n- Electronic Data Interchange (EDI) documents (X12, EDIFACT, etc.)\n- Multi-record type files where different lines have different formats\n- Files requiring complex preprocessing or custom parsing logic\n\n**File definition characteristics**\n\n- **Custom Parsing Rules**: Applies predefined parsing logic to complex file formats\n- **Reusable Configurations**: References externally defined parsing rules that can be reused\n- **Complex Format Support**: Handles formats that standard parsers cannot process\n- **Specialized Processing**: Often used for industry-specific or legacy formats\n\n**Implementation strategy for ai agents**\n\n1. **Format Analysis**:\n    - Determine if the file format is standard (CSV, JSON, XML) or requires custom parsing\n    - Check if the format follows industry standards like EDI, SWIFT, or fixed-width\n    - Assess if there are multiple record types within the same file\n    - Identify if specialized logic is needed to interpret the file structure\n\n2. 
**File Definition Selection**:\n    - Verify that a suitable file definition has already been created in the system\n    - Check if existing file definitions match the format requirements\n    - Confirm the file definition ID from system administrators if needed\n    - Ensure the file definition is compatible with the export's needs\n","properties":{"_fileDefinitionId":{"type":"string","format":"objectId","description":"Reference to the file definition resource."}}}}},"FileSystem":{"type":"object","description":"Configuration for FileSystem imports","properties":{"directoryPath":{"type":"string","description":"Directory path to read files from (required)"}},"required":["directoryPath"]},"AiAgent":{"type":"object","description":"AI Agent configuration used by both AiAgentImport and GuardrailImport (ai_agent type).\n\nConfigures which AI provider and model to use, along with instructions, parameter\ntuning, output format, and available tools. Two providers are supported:\n\n- **openai**: OpenAI models (GPT-4, GPT-4o, GPT-4o-mini, GPT-5, etc.)\n- **gemini**: Google Gemini models via LiteLLM proxy\n\nA `_connectionId` is optional (BYOK). If not provided on the parent import,\nplatform-managed credentials are used.\n","properties":{"provider":{"type":"string","enum":["openai","gemini"],"default":"openai","description":"AI provider to use.\n\n- **openai**: Uses OpenAI Responses API. Configure via the `openai` object.\n- **gemini**: Uses Google Gemini via LiteLLM. Configure via `litellm` with\n  Gemini-specific overrides in `litellm._overrides.gemini`.\n"},"openai":{"type":"object","description":"OpenAI-specific configuration. Used when `provider` is \"openai\".\n","properties":{"instructions":{"type":"string","maxLength":51200,"description":"System prompt that defines the AI agent's behavior, goals, and constraints.\nMaximum 50 KB.\n"},"model":{"type":"string","description":"OpenAI model identifier.\n"},"reasoning":{"type":"object","description":"Controls depth of reasoning for complex tasks.\n","properties":{"effort":{"type":"string","enum":["minimal","low","medium","high"],"description":"How much reasoning effort the model should invest"},"summary":{"type":"string","enum":["concise","auto","detailed"],"description":"Level of detail in reasoning summaries"}}},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"Sampling temperature. Higher values (e.g. 1.5) produce more creative output,\nlower values (e.g. 0.2) produce more focused and deterministic output.\n"},"topP":{"type":"number","minimum":0.1,"maximum":1,"description":"Nucleus sampling parameter"},"maxOutputTokens":{"type":"number","minimum":100,"maximum":128000,"default":1000,"description":"Maximum number of tokens in the model's response"},"serviceTier":{"type":"string","enum":["auto","default","priority"],"default":"default","description":"OpenAI service tier. 
\"priority\" provides higher rate limits and\nlower latency at increased cost.\n"},"output":{"type":"object","description":"Output format configuration","properties":{"format":{"type":"object","description":"Controls the structure of the model's output.\n","properties":{"type":{"type":"string","enum":["text","json_schema","blob"],"default":"text","description":"Output format type.\n\n- **text**: Free-form text response\n- **json_schema**: Structured JSON conforming to a schema\n- **blob**: Binary data output\n"},"name":{"type":"string","description":"Name for the output format (used with json_schema)"},"strict":{"type":"boolean","default":false,"description":"Whether to enforce strict schema validation on output"},"jsonSchema":{"type":"object","description":"JSON Schema for structured output. Required when `format.type` is \"json_schema\".\n","properties":{"type":{"type":"string","enum":["object","array","string","number","integer","boolean"]},"properties":{"type":"object","additionalProperties":true},"required":{"type":"array","items":{"type":"string"}},"additionalproperties":{"type":"boolean"}}}}},"verbose":{"type":"string","enum":["low","medium","high"],"default":"medium","description":"Level of detail in the model's response"}}},"tools":{"type":"array","description":"Tools available to the AI agent during processing.\n","items":{"type":"object","properties":{"type":{"type":"string","enum":["web_search","mcp","image_generation","tool"],"description":"Type of tool.\n\n- **web_search**: Search the web for information\n- **mcp**: Connect to an MCP server for additional tools\n- **image_generation**: Generate images\n- **tool**: Reference a Celigo Tool resource\n"},"webSearch":{"type":"object","description":"Web search configuration (empty object to enable)"},"imageGeneration":{"type":"object","description":"Image generation configuration","properties":{"background":{"type":"string","enum":["transparent","opaque"]},"quality":{"type":"string","enum":["low","medium","high"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024"]},"outputFormat":{"type":"string","enum":["png","webp","jpeg"]}}},"mcp":{"type":"object","description":"MCP server tool configuration","properties":{"_mcpConnectionId":{"type":"string","format":"objectId","description":"Connection to the MCP server"},"allowedTools":{"type":"array","description":"Specific tools to allow from the MCP server (all if omitted)","items":{"type":"string"}}}},"tool":{"type":"object","description":"Reference to a Celigo Tool resource.\n","properties":{"_toolId":{"type":"string","format":"objectId","description":"Reference to the Tool resource"},"overrides":{"type":"object","description":"Per-agent overrides for the tool's internal resources"}}}}}}}},"litellm":{"type":"object","description":"LiteLLM proxy configuration. 
Used when `provider` is \"gemini\".\n\nLiteLLM provides a unified interface to multiple AI providers.\nGemini-specific settings are in `_overrides.gemini`.\n","properties":{"model":{"type":"string","description":"LiteLLM model identifier"},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"Sampling temperature"},"maxCompletionTokens":{"type":"number","minimum":100,"maximum":128000,"default":1000,"description":"Maximum number of tokens in the response"},"topP":{"type":"number","minimum":0.1,"maximum":1,"description":"Nucleus sampling parameter"},"seed":{"type":"number","description":"Random seed for reproducible outputs"},"responseFormat":{"type":"object","description":"Output format configuration","properties":{"type":{"type":"string","enum":["text","json_schema","blob"],"default":"text"},"name":{"type":"string"},"strict":{"type":"boolean","default":false},"jsonSchema":{"type":"object","properties":{"type":{"type":"string","enum":["object","array","string","number","integer","boolean"]},"properties":{"type":"object","additionalProperties":true},"required":{"type":"array","items":{"type":"string"}},"additionalProperties":{"type":"boolean"}}}}},"_overrides":{"type":"object","description":"Provider-specific overrides","properties":{"gemini":{"type":"object","description":"Gemini-specific configuration overrides.\n","properties":{"systemInstruction":{"type":"string","maxLength":51200,"description":"System instruction for Gemini models. Equivalent to OpenAI's `instructions`.\nMaximum 50 KB.\n"},"tools":{"type":"array","description":"Gemini-specific tools","items":{"type":"object","properties":{"type":{"type":"string","enum":["googleSearch","urlContext","fileSearch","mcp","tool"],"description":"Type of Gemini tool.\n\n- **googleSearch**: Google Search grounding\n- **urlContext**: URL content retrieval\n- **fileSearch**: Search uploaded files\n- **mcp**: Connect to an MCP server\n- **tool**: Reference a Celigo Tool resource\n"},"googleSearch":{"type":"object","description":"Google Search configuration (empty object to enable)"},"urlContext":{"type":"object","description":"URL context configuration (empty object to enable)"},"fileSearch":{"type":"object","properties":{"fileSearchStoreNames":{"type":"array","items":{"type":"string"}}}},"mcp":{"type":"object","properties":{"_mcpConnectionId":{"type":"string","format":"objectId"},"allowedTools":{"type":"array","items":{"type":"string"}}}},"tool":{"type":"object","properties":{"_toolId":{"type":"string","format":"objectId"},"overrides":{"type":"object"}}}}}},"responseModalities":{"type":"array","description":"Response output modalities","items":{"type":"string","enum":["text","image"]},"default":["text"]},"topK":{"type":"number","description":"Top-K sampling parameter for Gemini"},"thinkingConfig":{"type":"object","description":"Controls Gemini's extended thinking capabilities","properties":{"includeThoughts":{"type":"boolean","description":"Whether to include thinking steps in the response"},"thinkingBudget":{"type":"number","minimum":100,"maximum":4000,"description":"Maximum tokens allocated for thinking"},"thinkingLevel":{"type":"string","enum":["minimal","low","medium","high"]}}},"imageConfig":{"type":"object","description":"Gemini image generation configuration","properties":{"aspectRatio":{"type":"string","enum":["1:1","2:3","3:2","3:4","4:3","4:5","5:4","9:16","16:9","21:9"]},"imageSize":{"type":"string","enum":["1K","2K","4k"]}}},"mediaResolution":{"type":"string","enum":["low","medium","high"],"description":"Resolution for 
media inputs (images, video)"}}}}}}}}},"Guardrail":{"type":"object","description":"Configuration for GuardrailImport adaptor type.\n\nGuardrails are safety and compliance checks that can be applied to data\nflowing through integrations. Three types of guardrails are supported:\n\n- **ai_agent**: Uses an AI model to evaluate data against custom instructions\n- **pii**: Detects and optionally masks personally identifiable information\n- **moderation**: Checks content against moderation categories (e.g. `hate`, `violence`, `harassment`, `sexual`, `self_harm`, `illicit`). Use category names exactly as listed in `moderation.categories` — e.g. `hate` (not \"hate speech\"), `violence` (not \"violent content\").\n\nGuardrail imports do not require a `_connectionId` (unless using BYOK for `ai_agent` type).\n","properties":{"type":{"type":"string","enum":["ai_agent","pii","moderation"],"description":"The type of guardrail to apply.\n\n- **ai_agent**: Evaluate data using an AI model with custom instructions.\n  Requires the `aiAgent` sub-configuration.\n- **pii**: Detect personally identifiable information in data.\n  Requires the `pii` sub-configuration with at least one entity type.\n- **moderation**: Check content for harmful categories.\n  Requires the `moderation` sub-configuration with at least one category.\n"},"confidenceThreshold":{"type":"number","minimum":0,"maximum":1,"default":0.7,"description":"Confidence threshold for guardrail detection (0 to 1).\n\nOnly detections with confidence at or above this threshold will be flagged.\nLower values catch more potential issues but may increase false positives.\n"},"aiAgent":{"$ref":"#/components/schemas/AiAgent"},"pii":{"type":"object","description":"Configuration for PII (Personally Identifiable Information) detection.\n\nRequired when `guardrail.type` is \"pii\". At least one entity type must be specified.\n","properties":{"entities":{"type":"array","description":"PII entity types to detect in the data.\n\nAt least one entity must be specified when using PII guardrails.\n","items":{"type":"string","enum":["credit_card_number","card_security_code_cvv_cvc","cryptocurrency_wallet_address","date_and_time","email_address","iban_code","bic_swift_bank_identifier_code","ip_address","location","medical_license_number","national_registration_number","persons_name","phone_number","url","us_bank_account_number","us_drivers_license","us_itin","us_passport_number","us_social_security_number","uk_nhs_number","uk_national_insurance_number","spanish_nif","spanish_nie","italian_fiscal_code","italian_drivers_license","italian_vat_code","italian_passport","italian_identity_card","polish_pesel","finnish_personal_identity_code","singapore_nric_fin","singapore_uen","australian_abn","australian_acn","australian_tfn","australian_medicare","indian_pan","indian_aadhaar","indian_vehicle_registration","indian_voter_id","indian_passport","korean_resident_registration_number"]}},"mask":{"type":"boolean","default":false,"description":"Whether to mask detected PII values in the output.\n\nWhen true, detected PII is replaced with masked values.\nWhen false, PII is only flagged without modification.\n"}}},"moderation":{"type":"object","description":"Configuration for content moderation.\n\nRequired when `guardrail.type` is \"moderation\". 
At least one category must be specified.\n","properties":{"categories":{"type":"array","description":"Content moderation categories to check for.\n\nAt least one category must be specified when using moderation guardrails.\n","items":{"type":"string","enum":["sexual","sexual_minors","hate","hate_threatening","harassment","harassment_threatening","self_harm","self_harm_intent","self_harm_instructions","violence","violence_graphic","illicit","illicit_violent"]}}}}},"required":["type"]},"OneToMany":{"type":"boolean","description":"Controls whether the resource treats child records within parent records as the primary data units.\n\n**Important: this is not for specifying where records are in an api response**\n\nIf you need to tell an export where to find the array of records in the HTTP response\nbody (e.g. \"the records are at data.items\"), use `http.response.resourcePath` instead.\n`oneToMany` serves a completely different purpose — it operates on records that have\nalready been extracted from the response.\n\n**What oneToMany actually does**\n\nWhen set to true, this field fundamentally changes how record data is processed:\n- The system will \"unwrap\" nested child records from their parent containers\n- Each child record becomes a separate output record for downstream processing\n- The pathToMany field must be set to indicate where these child records are located\n- Parent record fields can still be accessed via a special \"parent\" context\n\nThis is typically used on **lookup exports** (isLookup: true) or **imports** where\nthe incoming records contain nested arrays that need to be fanned out.\n\nCommon scenarios for enabling this option:\n- Processing order line items individually from an order export\n- Handling invoice line items from an invoice export\n- Processing individual transaction lines from journal entries\n- Extracting address records from customer exports\n\nThis setting applies for the duration of the current flow step only and does not affect\nhow data is stored or structured in other flow steps.\n\nIf false (default), the resource processes each top-level record as a single unit.\n","default":false},"PathToMany":{"type":"string","description":"Specifies the JSON path to child records when oneToMany mode is enabled.\n\nThis field is only used when oneToMany is set to true. It defines the exact location\nof child records within the parent record structure using dot notation:\n\n- Simple path: \"items\" for a direct child array field\n- Nested path: \"lines.lineItems\" for a more deeply nested array\n- Multi-level: \"details.items.subitems\" for deeply nested structures\n\nThe system uses this path to:\n- Locate the array of child records within each parent record\n- Extract each array element as a separate record for processing\n- Make both the child record data and parent context available to downstream steps\n\nImportant considerations:\n- The path must point to an array field\n- For row-based data (i.e. where Celigo models this via an array or arrays of objects), this field is not required\n- If the path is invalid or doesn't exist, the resource will report success but process zero records\n- Maximum path depth: 10 levels\n\nThis field must contain a valid JSON path expression using dot notation.\n"},"Filter":{"type":"object","description":"Configuration for selectively processing records based on specified criteria. 
This object enables\nprecise control over which items are included or excluded from processing operations.\n\n**Filter behavior**\n\nWhen configured, the filter is applied before processing begins:\n- Items that match the filter criteria are processed\n- Items that don't match are completely skipped\n- No partial processing is performed\n\n**Implementation approaches**\n\nThere are two distinct filtering mechanisms available:\n\n**Rule-Based Filtering (`type: \"expression\"`)**\n- **Best For**: Common filtering patterns based on standard attributes\n- **Capabilities**: Filter by names, values, dates, numerical ranges, text patterns\n- **Advantages**: Declarative, no coding required, consistent performance\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear, static criteria for selection\n\n**Script-Based Filtering (`type: \"script\"`)**\n- **Best For**: Complex logic, dynamic criteria, or business rules\n- **Capabilities**: Full programmatic control, access to complete metadata\n- **Advantages**: Maximum flexibility, can implement any filtering logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Simple rules aren't sufficient or logic needs to be dynamic\n","properties":{"type":{"type":"string","description":"Determines which filtering mechanism to use. This choice affects which properties\nmust be configured and how filtering logic is implemented.\n\n**Available types**\n\n**Rule-Based Filtering (`\"expression\"`)**\n- **Required Config**: The `expression` object with rule definitions\n- **Behavior**: Evaluates declarative rules against item attributes\n- **Best For**: Common patterns like name matching, date ranges, value limits\n- **Advantages**: Simpler to configure, no custom code required\n\n**Script-Based Filtering (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to determine which items to process\n- **Best For**: Complex conditions, business logic, dynamic criteria\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard filtering needs (name, size, date), use `\"expression\"`\n2. For complex logic or conditions not covered by expressions, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based filtering. This object enables filtering\nitems based on common attributes without requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define filtering rules that\ncan match against item attributes like name, type, value, date, and other properties.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Rules can be combined with AND/OR logic\n- Each rule can check a specific attribute\n- Multiple conditions can be applied (ranges, pattern matching, exact matches)\n\n**Common filter patterns**\n\n1. **Pattern matching**: Using wildcards like `*` and `?`\n2. **Value range filtering**: Numbers between min and max values\n3. **Date range filtering**: Items created/modified within specific time ranges\n4. 
**Status checking**: Items with specific status values or properties\n\nFor AI agents: Rule-based filtering should be your first choice when the filtering criteria\ncan be expressed in terms of standard attributes. Only use script-based filtering when\nmore complex logic is required.\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"1\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"1\"\nfor current implementations.\n","enum":["1"]},"rules":{"type":"array","description":"Expression array defining filter conditions using prefix notation. The first element is the operator,\nfollowed by its operands which may themselves be nested expression arrays.\n\nThe rule expression follows this pattern:\n- First element: Operator name (string)\n- Remaining elements: Operands for that operator (values or nested expressions)\n\n**Expression structure**\n\nFilter expressions use a prefix notation where operators appear before their operands:\n```\n[operator, operand1, operand2, ...]\n```\n\n**Comparison Operators**\n- `\"equals\"`: Exact match (equals)\n- `\"notequals\"`: Not equal to value (not equals)\n- `\"greaterthan\"`: Value is greater than specified value (is greater than)\n- `\"greaterthanequals\"`: Value is greater than or equal to specified value (is greater than or equals)\n- `\"lessthan\"`: Value is less than specified value (is less than)\n- `\"lessthanequals\"`: Value is less than or equal to specified value (is less than or equals)\n- `\"startswith\"`: String starts with specified prefix (starts with)\n- `\"endswith\"`: String ends with specified suffix (ends with)\n- `\"contains\"`: String contains specified substring (contains)\n- `\"doesnotcontain\"`: String does not contain specified substring (does not contain)\n- `\"isempty\"`: Field is empty or null (is empty)\n- `\"isnotempty\"`: Field contains a value (is not empty)\n- `\"matches\"`: Matches specified pattern (matches)\n\n**Logical Operators**\n- `\"and\"`: All conditions must be true\n- `\"or\"`: At least one condition must be true\n- `\"not\"`: Negates the condition\n\n**Field Access and Type Conversion**\n- `\"extract\"`: Access a field from the item by name\n- `\"settings\"`: Access a custom setting from the flow, flow step, or integration configuration\n- `\"boolean\"`: Convert value to Boolean type\n- `\"epochtime\"`: Convert value to Epoch Time (Unix timestamp)\n- `\"number\"`: Convert value to Number type\n- `\"string\"`: Convert value to String type\n\n**Field Access Details**\n\n**Using `extract` to access record fields:**\n- Retrieves values from the current record being processed\n- Can access nested properties using dot notation (e.g., `\"customer.email\"`)\n- Returns the raw field value which may need type conversion\n\n**Using `settings` to access configuration values:**\n- Retrieves values from the integration's configuration settings\n- Supports different scopes with prefix notation:\n  - `flow.settingName`: Access flow-level settings\n  - `export.settingName`: Access export-level settings\n  - `import.settingName`: Access import-level settings\n  - `integration.settingName`: Access integration-level settings\n- Useful for dynamic filtering based on configuration\n\n**Field Transformations**\n- `\"lowercase\"`: Convert string to lowercase\n- `\"uppercase\"`: Convert string to uppercase\n- `\"ceiling\"`: Round number up to the nearest integer\n- `\"floor\"`: Round number down to 
the nearest integer\n- `\"abs\"`: Get absolute value of a number\n\nType conversion operators are often necessary when comparing extracted field values against literals or when the field type doesn't match the comparison operator's expected type. For example:\n\n```json\n[\n  \"equals\",\n  [\n    \"number\",  // Convert to number before comparison\n    [\n      \"extract\",\n      \"quantity\"\n    ]\n  ],\n  100\n]\n```\n\nExample with datetime conversion:\n```json\n[\n  \"greaterthan\",\n  [\n    \"epochtime\",  // Convert to Unix timestamp before comparison\n    [\n      \"extract\",\n      \"createdDate\"\n    ]\n  ],\n  1609459200000  // January 1, 2021 as Unix timestamp in milliseconds\n]\n```\n\nExample with transformations:\n```json\n[\n  \"and\",\n  [\n    \"matches\",\n    [\n      \"lowercase\",  // Convert to lowercase before matching\n      [\n        \"string\",\n        [\n          \"extract\",\n          \"categories\"\n        ]\n      ]\n    ],\n    \"netsuite\"\n  ],\n  [\n    \"notequals\",\n    [\n      \"string\",\n      [\n        \"extract\",\n        \"recurrence.pattern.type\"\n      ]\n    ],\n    \"\"\n  ]\n]\n```\n\nExample comparing a record field with a flow setting:\n```json\n[\n  \"equals\",\n  [\n    \"string\",\n    [\n      \"extract\",\n      \"trantype\"\n    ]\n  ],\n  [\n    \"string\",\n    [\n      \"settings\",\n      \"flow.trantype\"\n    ]\n  ]\n]\n```\n\n**Examples**\n\nExample 1: Status field is not equal to \"cancelled\"\n```json\n[\n  \"notequals\",\n  [\n    \"extract\",\n    \"status\"\n  ],\n  \"cancelled\"\n]\n```\n\nExample 2: Filename starts with \"HC\"\n```json\n[\n  \"startswith\",\n  [\n    \"extract\",\n    \"filename\"\n  ],\n  \"HC\"\n]\n```\n\nExample 3: Amount is greater than 100\n```json\n[\n  \"greaterthan\",\n  [\n    \"number\",\n    [\n      \"extract\",\n      \"amount\"\n    ]\n  ],\n  100\n]\n```\n\nExample 4: Order date is after January 1, 2023\n```json\n[\n  \"greaterthan\",\n  [\n    \"extract\",\n    \"orderDate\"\n  ],\n  \"2023-01-01T00:00:00Z\"\n]\n```\n\nExample 5: Category contains any of [\"Urgent\", \"High Priority\"]\n```json\n[\n  \"anyof\",\n  [\n    \"extract\",\n    \"category\"\n  ],\n  [\"Urgent\", \"High Priority\"]\n]\n```\n","items":{"oneOf":[{"title":"String","type":"string"},{"title":"Number","type":"number"},{"title":"Boolean","type":"boolean"},{"title":"Object","type":"object"},{"title":"Array","type":"array"}]}}}},"script":{"type":"object","description":"Configuration for programmable script-based filtering. This object enables complex, custom\nfiltering logic beyond what expression-based filtering can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to determine which items\nshould be processed.\n\n**Implementation approach**\n\nScript-based filtering works by:\n1. Executing the specified function from the referenced script\n2. Passing item data to the function\n3. 
Using the function's return value (true/false) to determine inclusion\n\n**Common use cases**\n\nScript filtering is ideal for:\n- Complex business logic that can't be expressed as simple rules\n- Dynamic filtering criteria that change based on external factors\n- Content-based filtering that requires deep inspection\n- Advanced pattern matching beyond simple wildcards\n- Multi-stage filtering with intermediate logic\n\nFor AI agents: Only use script-based filtering when expression-based filtering is insufficient.\nScript filtering requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to the Script resource that contains the filtering logic. This must be a valid\nObjectId of a Script resource that exists in the system.\n\nThe referenced script must contain the function specified in the `function` field\nand must be written to handle filtering specifically. The script receives\nitem data as its input and must return a boolean value indicating whether\nto process the item (true) or skip it (false).\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n"},"function":{"type":"string","description":"Name of the function within the script to execute for filtering decisions. This function\nmust exist in the script referenced by _scriptId.\n\n**Function requirements**\n\nThe specified function must:\n- Accept item data as its first parameter\n- Return a boolean value (true to process the item, false to skip it)\n- Handle errors gracefully\n- Execute efficiently (as it may run for many items)\n\n**Function signature**\n\n```javascript\nfunction filterItems(itemData) {\n  // itemData contains properties of the item being evaluated\n  // Custom logic here\n  return true; // or false to skip the item\n}\n```\n\nFor AI agents: Ensure the function name exactly matches a function defined in the\nreferenced script, as mismatches will cause the filter to fail.\n"}}}}},"Transform":{"type":"object","description":"Configuration for transforming data during processing operations. This object enables\nreshaping of records.\n\n**Transformation capabilities**\n\nCeligo's transformation engine offers powerful features for data manipulation:\n- Precise field mapping with JSONPath expressions\n- Support for any level of nested arrays\n- Formula-based field value generation\n- Dynamic references to flow and integration settings\n\n**Implementation approaches**\n\nThere are two distinct transformation mechanisms available:\n\n**Rule-Based Transformation (`type: \"expression\"`)**\n- **Best For**: Most transformation scenarios from simple to complex\n- **Capabilities**: Field mapping, formula calculations, lookups, nested data handling\n- **Advantages**: Visual configuration, no coding required, intuitive interface\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear mapping requirements or need to reshape data structure\n\n**Script-Based Transformation (`type: \"script\"`)**\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Capabilities**: Full programmatic control, custom processing, complex business rules\n- **Advantages**: Maximum flexibility, can implement any transformation logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Visual transformation tools aren't sufficient for your use case\n","properties":{"type":{"type":"string","description":"Determines which transformation mechanism to use. 
This choice affects which properties\nmust be configured and how transformation logic is implemented.\n\n**Available types**\n\n**Rule-Based Transformation (`\"expression\"`)**\n- **Required Config**: The `expression` object with mapping definitions\n- **Behavior**: Applies declarative rules to reshape data\n- **Best For**: Most transformation scenarios from simple to complex\n- **Advantages**: Visual configuration, no coding required\n\n**Script-Based Transformation (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to transform data\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard data transformations, use `\"expression\"`\n2. For complex logic or specialized processing, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based transformations. This object enables reshaping data\nwithout requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define transformation rules that\ncan map, modify, and generate data elements.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Field mappings define how input data is transformed to target fields\n- Formulas can be used to calculate or generate new values\n- Lookups can enrich data by fetching related information\n- Mode determines how records are processed (create new or modify existing)\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"2\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"2\"\nfor current implementations.\n","enum":["2"]},"rulesTwoDotZero":{"type":"object","description":"Configuration for version 2 transformation rules. 
This object contains the core logic\nfor how data is mapped, enriched, and transformed.\n\n**Capabilities**\n\nTransformation 2.0 provides:\n- Precise field mapping with JSONPath expressions\n- Support for deeply nested data structures\n- Formula-based field generation\n- Dynamic lookups for data enrichment\n- Multiple operating modes to fit different scenarios\n","properties":{"mode":{"type":"string","description":"Transformation mode that determines how records are handled during processing.\n\n**Available modes**\n\n**Create Mode (`\"create\"`)**\n- **Behavior**: Builds entirely new output records from inputs\n- **Use When**: Output structure differs significantly from input\n- **Advantage**: Clean slate approach, no field inheritance\n\n**Modify Mode (`\"modify\"`)**\n- **Behavior**: Makes targeted edits to existing records\n- **Use When**: Output structure should remain similar to input\n- **Advantage**: Preserves unmapped fields from the original record\n","enum":["create","modify"]},"mappings":{"$ref":"#/components/schemas/Mappings"},"lookups":{"allOf":[{"description":"Shared lookup tables used across all mappings defined in the transformation rules.\n\n**Purpose**\n\nLookups provide centralized value translation that can be referenced from any mapping\nin your transformation configuration. They enable consistent translation of codes, IDs,\nand values between systems without duplicating translation logic.\n\n**Usage in transformations**\n\nLookups are particularly valuable in transformations for:\n\n- **Data Normalization**: Standardizing values from diverse source systems\n- **Code Translation**: Converting between different coding systems (e.g., status codes)\n- **Field Enrichment**: Adding descriptive values based on ID or code lookups\n- **Cross-Reference Resolution**: Mapping identifiers between integrated systems\n\n**Implementation**\n\nLookups are defined once in this array and referenced by name in mappings:\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"statusMapping\",\n    \"map\": {\n      \"A\": \"Active\",\n      \"I\": \"Inactive\",\n      \"P\": \"Pending\"\n    },\n    \"default\": \"Unknown Status\"\n  }\n]\n```\n\nThen referenced in mappings using the lookupName property:\n\n```json\n{\n  \"generate\": \"status\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.statusCode\",\n  \"lookupName\": \"statusMapping\"\n}\n```\n\nThe system automatically applies the lookup during transformation processing.\n\nFor complete details on lookup properties and behavior, see the Lookups schema.\n"},{"$ref":"#/components/schemas/Lookups"}]}}}}},"script":{"type":"object","description":"Configuration for programmable script-based transformations. This object enables complex, custom\ntransformation logic beyond what expression-based transformations can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to transform data according to\nspecialized business rules or complex algorithms.\n\n**Implementation approach**\n\nScript-based transformation works by:\n1. Executing the specified function from the referenced script\n2. Passing input data to the function\n3. 
Using the function's return value as the transformed output\n\n**Common use cases**\n\nScript transformation is ideal for:\n- Complex business logic that can't be expressed through mappings\n- Algorithmic transformations requiring computation\n- Dynamic transformations based on external factors\n- Legacy system data format compatibility\n- Multi-stage processing with intermediate steps\n\nOnly use script-based transformation when expression-based transformation is insufficient.\nScript transformation requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the transformation logic.\n\nThe referenced script should contain the function specified in the\n'function' property.\n","format":"objectid"},"function":{"type":"string","description":"Name of the function within the script to execute for transformation. This function\nmust exist in the script referenced by _scriptId.\n"}}}}},"Mappings":{"type":"array","description":"Array of field mapping configurations for transforming data from one format into another.\n\n**Guidance**\n\nThis schema is designed around RECURSION as its core architectural principle. Understanding this recursive\nnature is essential for building effective mappings:\n\n1. The schema is self-referential by design - a mapping can contain nested mappings of the same structure\n2. Complex data structures (nested objects, arrays of objects, arrays of arrays of objects) are ALL\n   handled through this recursive pattern\n3. Each mapping handles one level of the data structure; deeper levels are handled by nested mappings\n\nWhen generating mappings programmatically:\n- For simple fields (string, number, boolean): Create single mapping objects\n- For objects: Create a parent mapping with nested 'mappings' array containing child field mappings\n- For arrays: Use 'buildArrayHelper' with extract paths defining array inputs and\n  recursive 'mappings' to define object structures\n\nThe system will process these nested structures recursively during runtime, ensuring proper construction\nof complex hierarchical data while maintaining excellent performance.\n","items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. json Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean, date)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
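For example (an illustrative fragment only), an entry that iterates an array of objects might declare:\n\n```json\n{\"extract\": \"$.order.items[*]\", \"sourceDataType\": \"objectarray\"}\n```\n\n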
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}}},"items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. json Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean, date)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
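For example (an illustrative fragment only), an entry that iterates an array of objects might declare:\n\n```json\n{\"extract\": \"$.addresses[*]\", \"sourceDataType\": \"objectarray\"}\n```\n\n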
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}},"Lookups":{"type":"array","description":"Configuration for value-to-value transformations using lookup tables.\n\n**Purpose**\n\nLookups provide a way to translate values from one system to another. They transform\ninput values into output values using either static mapping tables or\ndynamic lookup caches.\n\n**Lookup mechanisms**\n\nThere are two distinct lookup mechanisms available:\n\n1. **Static Lookups**: Define a simple key-value map object and store it as part of your resource\n   - Best for: Small, fixed sets of values that rarely change\n   - Implementation: Configure the `map` object with input-to-output value mappings\n   - Example: Country codes, status values, simple translations\n\n2. **Dynamic Lookups**: Reference an existing 'Lookup Cache' resource in your Celigo account\n   - Best for: Large datasets, frequently changing values, or complex reference data\n   - Implementation: Configure `_lookupCacheId` to reference cached data maintained independently\n   - Example: Product catalogs, customer databases, pricing information\n\n**Property usage**\n\nThere are two mutually exclusive ways to configure lookups, depending on which mechanism you choose:\n\n1. 
**For Static Mappings**: Configure the `map` property with a direct key-value object\n   ```json\n   \"map\": {\"US\": \"United States\", \"CA\": \"Canada\"}\n   ```\n\n2. **For Dynamic Lookups**: Configure the following properties:\n   - `_lookupCacheId`: Reference to the lookup cache resource\n   - `extract`: JSON path to extract specific value from the returned lookup object\n\n**When to use**\n\nLookups are ideal for:\n\n1. **Value Translation**: Mapping codes or IDs to human-readable values\n\n2. **Data Enrichment**: Adding related information to records during processing\n\n3. **Normalization**: Ensuring consistent formatting of values across systems\n\n**Implementation details**\n\nLookups can be referenced in:\n\n1. **Field Mappings**: Direct use in field transformation configurations\n\n2. **Handlebars Templates**: Use within templates with the syntax:\n   ```\n   {{lookup 'lookupName' record.fieldName}}\n   ```\n\n**Example usage**\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"countryCodeToName\",\n    \"map\": {\n      \"US\": \"United States\",\n      \"CA\": \"Canada\",\n      \"UK\": \"United Kingdom\"\n    },\n    \"default\": \"Unknown Country\",\n    \"allowFailures\": true\n  },\n  {\n    \"name\": \"productDetails\",\n    \"_lookupCacheId\": \"60a2c4e6f321d800129a1a3c\",\n    \"extract\": \"$.details.price\",\n    \"allowFailures\": false\n  }\n]\n```\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique identifier for the lookup table within this configuration.\n\nThis name must be unique within the scope where the lookup is defined and is used to reference\nthe lookup in handlebars templates with the syntax {{lookup 'name' value}}.\n\nChoose descriptive names that indicate the transformation purpose, such as:\n- \"countryCodeToName\" for country code to full name conversion\n- \"statusMapping\" for status code translations\n- \"departmentCodes\" for department code to name mapping\n"},"map":{"type":"object","description":"The lookup mapping table as key-value pairs.\n\nThis object contains the input values as keys and their corresponding\noutput values. When an input value matches a key in this object,\nit will be replaced with the corresponding value.\n\nThe map should be kept to a reasonable size (typically under 100 entries)\nfor optimal performance. 
For larger mapping requirements, consider using\ndynamic lookups instead.\n\nMaps can include:\n- Simple code to name conversions: {\"US\": \"United States\"}\n- Status transformations: {\"A\": \"Active\", \"I\": \"Inactive\"}\n- ID to name mappings: {\"100\": \"Marketing\", \"200\": \"Sales\"}\n\nValues can be strings, numbers, or booleans, but all are stored as strings\nin the configuration.\n"},"_lookupCacheId":{"type":"string","description":"Reference to a LookupCache resource that contains the reference data for the lookup.\n\n**Purpose**\n\nThis field connects the lookup to an external data source that has been cached in the system.\nUnlike static lookups that use the `map` property, dynamic lookups can reference large datasets\nor frequently changing information without requiring constant updates to the integration.\n\n**Implementation details**\n\nThe LookupCache resource referenced by this ID contains:\n- The data records to be used as a reference source\n- Configuration for how the data should be indexed and accessed\n- Caching parameters to balance performance with data freshness\n\n**Usage patterns**\n\nCommonly used to reference:\n- Product catalogs or SKU databases\n- Customer or account information\n- Pricing tables or discount rules\n- Complex business logic lookup tables\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n","format":"objectid"},"extract":{"type":"string","description":"JSON path expression that extracts a specific value from the cached lookup object.\n\n**Purpose**\n\nWhen using dynamic lookups with a LookupCache, this JSON path identifies which field to extract\nfrom the cached object after it has been retrieved using the lookup key.\n\n**Implementation details**\n\n- Must use JSON path syntax (similar to mapping extract fields)\n- Operates on the cached object returned by the lookup operation\n- Examples:\n  - \"$.name\" - Extract the name field from the top level\n  - \"$.details.price\" - Extract a nested price field\n  - \"$.attributes[0].value\" - Extract a value from the first element of an array\n\n**Usage scenario**\n\nWhen a lookup cache contains complex objects:\n```json\n// Cache entry for key \"PROD-123\":\n{\n  \"id\": \"PROD-123\",\n  \"name\": \"Premium Widget\",\n  \"details\": {\n    \"price\": 99.99,\n    \"currency\": \"USD\",\n    \"inStock\": true\n  }\n}\n```\n\nSetting extract to \"$.details.price\" would return 99.99 as the lookup result.\n\nIf no extract is provided, the entire cached object is returned as the lookup result.\n"},"default":{"type":"string","description":"Default value to use when the source value is not found in the lookup map.\n\nThis value is used as a fallback when:\n1. The source value doesn't match any key in the map\n2. 
allowFailures is set to true\n\nSetting an appropriate default helps prevent flow failures due to unexpected\nvalues and provides predictable behavior for edge cases.\n\nCommon default patterns include:\n- Descriptive unknowns: \"Unknown Country\", \"Unspecified Status\"\n- Original value indicators: \"{Original Value}\", \"No mapping found\"\n- Neutral values: \"Other\", \"N/A\", \"Miscellaneous\"\n\nIf allowFailures is false and no default is specified, the flow will fail\nwhen encountering unmapped values.\n"},"allowFailures":{"type":"boolean","description":"When true, missing lookup values will use the default value rather than causing an error.\n\n**Behavior control**\n\nThis field determines how the system handles source values that don't exist in the map:\n\n- true: Use the default value for missing mappings and continue processing\n- false: Treat missing mappings as errors, failing the record\n\n**Recommendation**\n\nSet this to true when:\n- New source values might appear over time\n- Data quality issues could introduce unexpected values\n- Processing should continue even with imperfect mapping\n\nSet this to false when:\n- Complete data accuracy is critical\n- All possible source values are known and controlled\n- Missing mappings indicate serious data problems that should be addressed\n\nThe best practice is typically to set allowFailures to true with a meaningful\ndefault value, so flows remain operational while alerting you to missing mappings.\n"}}}},"Form":{"type":"object","description":"Configuration for creating user-friendly settings forms that make it easier for less technical users\nto configure integration resources.\n\n**Settings form builder**\n\nThe Settings Form Builder allows you to create or edit user-friendly fields that prompt for text entry\nor selections that will be returned as settings applied to this resource. Your forms can include any\nfield types that you see elsewhere in integrator.io, such as:\n\n- Text fields\n- Dropdown selections\n- Checkboxes\n- Radio buttons\n- Date pickers\n- Multi-select fields\n- Search fields\n\nForm fields make it much easier for less technical users to work with your integration settings by:\n\n- Providing clear labels and help text\n- Enforcing validation rules\n- Offering pre-defined selection options\n- Grouping related settings logically\n- Supporting conditional visibility\n- Creating a consistent user experience\n","properties":{"form":{"type":"object","description":"Configuration that defines the structure, fields, and behavior of the settings form.\n\nThis object contains the complete definition of the form's layout, fields, validation rules,\nand interactive behaviors. 
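For orientation, a minimal illustrative form definition (the field id, name, and labels are hypothetical) could look like:\n\n```json\n{\n  \"fieldMap\": {\n    \"storeName\": {\n      \"id\": \"storeName\",\n      \"name\": \"storeName\",\n      \"type\": \"text\",\n      \"label\": \"Store name\",\n      \"required\": true\n    }\n  },\n  \"layout\": {\n    \"type\": \"column\",\n    \"containers\": [\n      {\"type\": \"box\", \"label\": \"General\", \"fields\": [\"storeName\"]}\n    ]\n  }\n}\n```\n\n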
The specific structure depends on the form complexity and can include\nfield definitions, sections, conditional display logic, and default values.\n\nThe form configuration is typically created and managed through the visual Form Builder interface\nrather than edited directly as JSON.\n","properties":{"fieldMap":{"type":"object","description":"A mapping of field identifiers to their configuration objects.\nEach key in this object represents a unique field ID, and the value contains\nall the configuration settings for that specific form field.\n","additionalProperties":{"type":"object","description":"Configuration for an individual form field.\n","properties":{"id":{"type":"string","description":"Unique identifier for this field within the form.\nThis value typically matches the key in the fieldMap object.\n"},"name":{"type":"string","description":"Name of the field, used as the property name when generating the settings object\nfrom the submitted form data.\n"},"type":{"type":"string","description":"The type of form control to render for this field.\n","enum":["text","checkbox","radiogroup","relativeuri","editor","keyvalue","select","multiselect","toggle","datetime","date"]},"label":{"type":"string","description":"Display label shown next to the field in the form.\n"},"description":{"type":"string","description":"Detailed explanation text that appears below the field, providing more context\nthan the label or helpText.\n"},"helpText":{"type":"string","description":"Explanatory text that appears when hovering over the help icon next to the field.\nUsed to provide additional guidance on how to use the field.\n"},"required":{"type":"boolean","description":"When true, the field must have a value before the form can be submitted.\n","default":false},"multiline":{"type":"boolean","description":"For text fields, determines whether the input should be a multi-line text area\ninstead of a single-line input.\n","default":false},"rowsMax":{"type":"integer","description":"For multiline text fields, specifies the maximum number of visible rows.\n"},"inputType":{"type":"string","description":"For text fields, specifies the HTML input type attribute to apply additional\nvalidation or specialized input behavior.\n","enum":["text","number","email","password","tel","url"]},"delimiter":{"type":"string","description":"For text fields, specifies a character to use for splitting the input into an array.\nUsed for collecting multiple values in a single text field.\n"},"mode":{"type":"string","description":"For editor fields, specifies the type of content being edited for syntax highlighting.\n","enum":["json","xml","csv","text"]},"keyName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the key input.\n"},"valueName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the value input.\n"},"showDelete":{"type":"boolean","description":"For keyvalue fields, determines whether to show a delete button for each key-value pair.\n"},"doNotAllowFutureDates":{"type":"boolean","description":"For date and datetime fields, restricts selection to dates not in the future.\n"},"skipTimezoneConversion":{"type":"boolean","description":"For datetime fields, prevents automatic timezone conversion of the date value.\n"},"options":{"type":"array","description":"For fields that present choices (select, multiselect, radiogroup, toggle), defines\nthe available options.\n","items":{"oneOf":[{"title":"Option 
group","type":"object","properties":{"items":{"type":"array","items":{"oneOf":[{"title":"String value","type":"string"},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]},"description":"Array of option values/labels to display in the selection control.\n"}}},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]}},"visibleWhen":{"type":"array","description":"Conditional display rules that determine when this field should be visible.\nIf empty or not provided, the field is always visible.\n","items":{"type":"object","properties":{"field":{"type":"string","description":"The ID of another field whose value controls the visibility of this field.\n"},"is":{"type":"array","items":{"type":"string"},"description":"Array of values - if the referenced field has any of these values,\nthis field will be visible.\n"}}}}}}},"layout":{"type":"object","description":"Defines how the form fields are arranged and grouped in the UI.\nThe layout can organize fields into columns, sections, or other visual groupings.\n","properties":{"type":{"type":"string","description":"The type of layout to use for the form.\n","enum":["column"]},"containers":{"type":"array","description":"Array of container objects that group fields or contain nested containers.\nEach container can represent a column, box, indented section, or collapsible section.\n","items":{"type":"object","properties":{"type":{"type":"string","description":"The visual style of the container.\n","enum":["indent","box","collapse"]},"label":{"type":"string","description":"The heading text displayed for this container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this container.\nEach ID must correspond to a key in the fieldMap object.\n"},"containers":{"type":"array","description":"Nested containers within this container. 
Allows for hierarchical organization\nof fields with different visual styles.\n","items":{"type":"object","properties":{"label":{"type":"string","description":"The heading text displayed for this nested container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this nested container.\n"}}}}}}}}}},"additionalProperties":true},"init":{"type":"object","description":"Configuration for custom JavaScript initialization that executes when the form is first loaded.\n\nThis object defines a JavaScript hook that prepares the form for use, sets initial field values,\nperforms validation, or otherwise customizes the form behavior before it is displayed to the user.\n\n**Function signature**\n\nThe initialization function is invoked with a single 'options' argument containing contextual information:\n```javascript\nfunction formInit(options) {\n  // Process options and return the form object\n  return options.resource.settingsForm.form;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.resource` - The current resource being configured\n- `options.parentResource` - The parent of the current resource\n- `options.grandparentResource` - The grandparent of the current resource\n- `options.license` - For integration apps, the license provisioned to the integration\n- `options.parentLicense` - For integration apps, the parent of the license\n\n\n**Common uses**\n\n- Dynamically generate field options based on resource configuration\n- Pre-populate default values from related resources\n- Apply conditional logic that depends on resource properties\n- Add, remove, or modify form fields based on user permissions or account settings\n- Fetch external data to populate selection options\n- Implement complex validation rules that depend on resource context\n- Create branching form experiences based on user selections\n\n**Return value**\n\nThe function must return a valid form object that the UI can render.\nThrowing an exception will signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called when the form\nis initialized and should handle any custom setup logic.\n\nThe function must follow the expected signature and return a valid form object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the initialization function.\n\nThe referenced script should contain the function specified in the\n'function' property. This script must be accessible within the user's account\nand have appropriate permissions.\n"}}}}},"PreSave":{"type":"object","description":"Defines a JavaScript hook that executes before the resource is saved.\n\nThis hook allows for programmatic validation, transformation, or enrichment of the\nresource itself before it is persisted. 
It can be used to enforce business rules,\nset derived properties, or implement cross-field validations that can't be expressed\nthrough the standard UI.\n\n**Function signature**\n\nThe preSave function is invoked with a single 'options' argument containing:\n```javascript\nfunction preSave(options) {\n  // Process options and return the modified resource\n  return options.newResource;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.newResource` - The resource being saved (with pending changes)\n- `options.oldResource` - The previous version of the resource (before changes)\n\n\n**Common uses**\n\n- Enforcing complex business rules across multiple fields\n- Automatically deriving field values based on other configuration\n- Performing validation that depends on external systems or data\n- Normalizing or standardizing configuration values\n- Adding computed or derived properties\n- Implementing versioning or change tracking\n- Dynamically looking up data using the Celigo API module to enrich configuration\n\n**Return value**\n\nThe function must return the newResource object (potentially modified) to be saved.\nThrowing an exception will prevent saving and signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called just before\nthe resource is saved.\n\nThe function must follow the expected signature and return the resource object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the preSave function.\n\nThe referenced script should contain the function specified in the\n'function' property. 
This script must be accessible within the user's account\nand have appropriate permissions.\n"}}},"Settings":{"type":"object","description":"Configuration settings that can be accessed by hooks, filters, mappings and handlebars templates at runtime.\n\nIt enables customization of the resource's logic, allowing hooks, mappings, filters, and\nhandlebars to access and apply the settings at runtime.\n\n**Usage**\n\nThe settings object can store arbitrary JSON data that you want to save with the resource.\nWhile it's often populated through a form defined in the `settingsForm` field, you can also:\n\n- Directly provide JSON settings without using a form\n- Store configuration values used by hooks and templates\n- Create resource-specific constants and parameters\n- Maintain lookup tables or mapping structures\n- Define conditional logic parameters\n\n**Accessibility**\n\nSettings are available in:\n- All handlebars fields for building dynamic payloads\n- Field mapping expressions\n- JavaScript hooks via the options object\n- Filters and transformations\n\n**Best practices**\n\nFor non-technical users, create a custom form instead of editing the JSON directly.\nThis provides a user-friendly interface for updating settings without requiring JSON knowledge.\n","additionalProperties":true},"ResourceResponse":{"type":"object","description":"Core response fields shared by all Celigo resources","properties":{"_id":{"type":"string","format":"objectId","readOnly":true,"description":"Unique identifier for the resource.\n\nThe _id is used in:\n- API endpoints that operate on a specific resource (e.g., GET, PUT, DELETE)\n- References from other resources (e.g., flows that use this resource)\n- Job history and error tracking\n\nFormat: 24-character hexadecimal string\n"},"createdAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was initially created.\n\nThis read-only field is automatically set during resource creation and cannot\nbe modified. It provides an audit trail for when the resource was first added\nto the system, which can be useful for:\n\n- Resource lifecycle management\n- Audit and compliance reporting\n- Troubleshooting integration timelines\n- Identifying older resources that may need review\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"},"lastModified":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was most recently updated.\n\nThis read-only field is automatically updated whenever any property of the\nresource is modified. It provides an audit trail that can be used for:\n\n- Determining if a resource has changed since it was last reviewed\n- Monitoring configuration changes during troubleshooting\n- Implementing cache invalidation strategies\n- Synchronizing related resources based on modification time\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix)\nand will always be equal to or later than the createdAt timestamp.\n"},"deletedAt":{"type":["string","null"],"format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was marked for deletion.\n\nWhen this field is present and contains a valid timestamp, it indicates\nthat the resource has been soft-deleted (moved to the recycle bin) but not\nyet permanently removed from the system. 
This allows for recovery of\naccidentally deleted resources within a specified retention period.\n\nThe deletedAt timestamp enables:\n- Filtering deleted resources from active resource listings\n- Implementing time-based retention policies for permanent deletion\n- Tracking deletion events for audit and compliance purposes\n- Resource recovery workflows with clear timeframes\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\nWhen null or absent, the resource is considered active.\n"}},"required":["_id"]},"IAResourceResponse":{"type":"object","description":"Integration app response fields for resources that are part of integration apps","properties":{"_integrationId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the specific integration instance that contains this resource.\n\nThis field is only populated for resources that are part of an integration app\ninstallation. It contains the unique identifier (_id) of the integration\nresource that was installed in the account.\n\nThe integration instance represents a specific installed instance of an\nintegration app, with its own configuration, settings, and runtime environment.\n\nThis reference enables:\n- Tracing the resource back to its parent integration instance\n- Permission and access control based on integration ownership\n- Lifecycle management (enabling/disabling, updating, or uninstalling)\n"},"_connectorId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the integration app that defines this resource.\n\nThis field is only populated for resources that are part of an integration app.\nIt contains the unique identifier (_id) of the integration app (connector)\nthat defines the structure, behavior, and templates for this resource.\n\nThe integration app is the published template that can be installed\nmultiple times across different accounts, with each installation creating\na separate integration instance (referenced by _integrationId).\n\nThis reference enables:\n- Identifying the source integration app for this resource\n- Determining which template version is being used\n- Linking to documentation, support, and marketplace information\n"}}},"AIDescription":{"type":"object","description":"AI-generated descriptions and documentation for the resource.\n\nThis object contains automatically generated content that helps users\nunderstand the purpose, behavior, and configuration of the resource without\nrequiring them to analyze the technical details. The AI-generated content\nis sanitized and safe for display in the UI.\n","properties":{"summary":{"type":"string","description":"Brief AI-generated summary of the resource's purpose and functionality.\n\nThis concise description provides a quick overview of what the resource does,\nwhat systems it interacts with, and its primary role in the integration.\nThe summary is suitable for display in list views, dashboards, and other\ncontexts where space is limited.\n\nMaximum length: 10KB\n"},"detailed":{"type":"string","description":"Comprehensive AI-generated description of the resource's functionality.\n\nThis detailed explanation covers the resource's purpose, configuration details,\ndata flow patterns, filtering logic, and other technical aspects. 
It provides\nin-depth information suitable for documentation, tooltips, or detailed views\nin the administration interface.\n\nThe content may include HTML formatting for improved readability.\n\nMaximum length: 10KB\n"},"generatedOn":{"type":"string","format":"date-time","description":"Timestamp indicating when the AI description was generated.\n\nThis field helps track the freshness of the AI-generated content and\ndetermine when it might need to be regenerated due to changes in the\nresource's configuration or behavior.\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"}}},"APIM":{"type":"array","description":"Read-only field that stores information about the integration resources\npublished in the API Management (APIM) platform.\n\nThis field tracks the relationship between integrator.io resources and their\npublished counterparts in the Gravitee API Management platform, which is\ntightly integrated with the Celigo UI. When resources are \"pushed\" to Gravitee,\nthis field is populated with the relevant identifiers and statuses.\n","items":{"type":"object","properties":{"apiId":{"type":"string","description":"Identifier for the API where this integrator.io resource is published in the APIM.\n\nThis is a Gravitee resource identifier (not prefixed with underscore like Celigo IDs)\nthat uniquely identifies the API in the API Management platform.\n"},"flowId":{"type":"string","description":"Identifier for the flow within the API where this integrator.io resource is linked.\n\nWhen an API has multiple integrator.io resources linked, each resource is associated\nwith a specific flow in the API, identified by this field. This is a Gravitee\nresource identifier.\n"},"status":{"type":"string","description":"Indicates the publishing stage of the integrator.io resource in APIM.\n\nPossible values:\n- 'oaspending': The resource is published but the OpenAPI Specification (OAS) is not\n  yet published. The apiId will be updated with the API ID created in APIM.\n- 'published': The OpenAPI Specification for the integrator.io resource has been\n  successfully uploaded to APIM.\n","enum":["oaspending","published"]}}}},"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. `500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}},"responses":{"401-unauthorized":{"description":"Unauthorized. 
The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"404-not-found":{"description":"Not found. The requested resource does not exist or is not visible to the caller.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}}},"paths":{"/v1/imports/{_id}":{"get":{"summary":"Get an import","description":"Returns the complete configuration of a specific import.\n","operationId":"getImportById","tags":["Imports"],"parameters":[{"name":"_id","in":"path","description":"The unique identifier of the import","required":true,"schema":{"type":"string","format":"objectId"}}],"responses":{"200":{"description":"Import retrieved successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Response"}}}},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"$ref":"#/components/responses/404-not-found"}}}}}}
````
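
A minimal request sketch for the endpoint above is shown below. It is illustrative only: it assumes Node 18+ `fetch`, a hypothetical `INTEGRATOR_TOKEN` environment variable holding the bearer token, and a placeholder import `_id`; swap in the EU base URL from the `servers` list for EU-region accounts.

```javascript
// Minimal sketch (Node 18+): retrieve a single import by _id.
// All credentials and identifiers below are placeholders.
const BASE_URL = 'https://api.integrator.io'; // or https://api.eu.integrator.io (EU region)
const TOKEN = process.env.INTEGRATOR_TOKEN;   // assumed env var with the bearer token
const IMPORT_ID = '507f1f77bcf86cd799439011'; // placeholder 24-character objectId

async function getImport(importId) {
  const res = await fetch(`${BASE_URL}/v1/imports/${importId}`, {
    headers: { Authorization: `Bearer ${TOKEN}` },
  });

  if (res.status === 401) {
    // 401s carry a bare { message } body, not the { errors: [...] } envelope.
    const { message } = await res.json();
    throw new Error(`Authentication failed: ${message}`);
  }
  if (res.status === 404) {
    throw new Error('Import not found or not visible to this token');
  }

  return res.json(); // full import configuration (Response schema)
}

getImport(IMPORT_ID).then((doc) => {
  console.log(doc._id, doc.name, doc.adaptorType);
});
```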

## Update an import

> Updates an existing import with the provided configuration.\
> This is used for major updates to an import's structure or behavior.<br>
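
The sketch below illustrates one way to call this operation. It assumes the update endpoint mirrors the path pattern of the other import operations (`PUT /v1/imports/{_id}`) and that a PUT replaces the stored configuration; the token, identifiers, and field values are placeholders, and the adaptor-specific configuration object (e.g. `http`) is omitted for brevity.

```javascript
// Illustrative sketch (Node 18+): update an existing import.
// Path and verb are assumed (PUT /v1/imports/{_id}); all values are placeholders.
const BASE_URL = 'https://api.integrator.io';
const TOKEN = process.env.INTEGRATOR_TOKEN;   // assumed env var with the bearer token
const IMPORT_ID = '507f1f77bcf86cd799439011'; // placeholder objectId

// Send the complete import configuration with your changes applied.
// The adaptor-specific object (e.g. "http" for HTTPImport) is omitted here.
const updatedImport = {
  name: 'Import customers - updated',
  description: 'Renamed via the API',
  _connectionId: '5f0e7b9a2c1d3e4f5a6b7c8d', // placeholder connection _id
  adaptorType: 'HTTPImport',
};

async function updateImport(importId, body) {
  const res = await fetch(`${BASE_URL}/v1/imports/${importId}`, {
    method: 'PUT',
    headers: {
      Authorization: `Bearer ${TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(body),
  });

  if (!res.ok) {
    // Validation failures generally use the { errors: [{ code, message, field }] } envelope.
    throw new Error(`Update failed (${res.status}): ${await res.text()}`);
  }
  return res.json();
}

updateImport(IMPORT_ID, updatedImport).then((doc) => {
  console.log('lastModified:', doc.lastModified);
});
```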

````json
{"openapi":"3.1.0","info":{"title":"Imports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"Request":{"type":"object","description":"Configuration for import","properties":{"_connectionId":{"type":"string","format":"objectId","description":"A unique identifier representing the specific connection instance within the system. This ID is used to track, manage, and reference the connection throughout its lifecycle, ensuring accurate routing and association of messages or actions to the correct connection context.\n\n**Field behavior**\n- Uniquely identifies a single connection session.\n- Remains consistent for the duration of the connection.\n- Used to correlate requests, responses, and events related to the connection.\n- Typically generated by the system upon connection establishment.\n\n**Implementation guidance**\n- Ensure the connection ID is globally unique within the scope of the application.\n- Use a format that supports easy storage and retrieval, such as UUID or a similar unique string.\n- Avoid exposing sensitive information within the connection ID.\n- Validate the connection ID format on input to prevent injection or misuse.\n\n**Examples**\n- \"123e4567-e89b-12d3-a456-426614174000\"\n- \"conn-9876543210\"\n- \"a1b2c3d4e5f6g7h8i9j0\"\n\n**Important notes**\n- The connection ID should not be reused for different connections.\n- It is critical for maintaining session integrity and security.\n- Should be treated as an opaque value by clients; its internal structure should not be assumed.\n\n**Dependency chain**\n- Generated during connection initialization.\n- Used by session management and message routing components.\n- Referenced in logs, audits, and debugging tools.\n\n**Technical details**\n- Typically implemented as a string data type.\n- May be generated using UUID libraries or custom algorithms.\n- Stored in connection metadata and passed in API headers or payloads as needed."},"_integrationId":{"type":"string","format":"objectId","description":"The unique identifier for the integration instance associated with the current operation or resource. 
This ID is used to reference and manage the specific integration within the system, ensuring that actions and data are correctly linked to the appropriate integration context.\n\n**Field behavior**\n- Uniquely identifies an integration instance.\n- Used to associate requests or resources with a specific integration.\n- Typically immutable once set to maintain consistent linkage.\n- May be required for operations involving integration-specific data or actions.\n\n**Implementation guidance**\n- Ensure the ID is generated in a globally unique manner to avoid collisions.\n- Validate the format and existence of the integration ID before processing requests.\n- Use this ID to fetch or update integration-related configurations or data.\n- Secure the ID to prevent unauthorized access or manipulation.\n\n**Examples**\n- \"intg_1234567890abcdef\"\n- \"integration-987654321\"\n- \"a1b2c3d4e5f6g7h8i9j0\"\n\n**Important notes**\n- This ID should not be confused with user IDs or other resource identifiers.\n- It is critical for maintaining the integrity of integration-related workflows.\n- Changes to this ID after creation can lead to inconsistencies or errors.\n\n**Dependency chain**\n- Dependent on the integration creation or registration process.\n- Used by downstream services or components that interact with the integration.\n- May be referenced in logs, audit trails, and monitoring systems.\n\n**Technical details**\n- Typically represented as a string.\n- May follow a specific pattern or prefix to denote integration IDs.\n- Stored in databases or configuration files linked to integration metadata.\n- Used as a key in API calls, database queries, and internal processing logic."},"_connectorId":{"type":"string","format":"objectId","description":"The unique identifier for the connector associated with the resource or operation. 
This ID is used to reference and manage the specific connector within the system, enabling integration and communication between different components or services.\n\n**Field behavior**\n- Uniquely identifies a connector instance.\n- Used to link resources or operations to a specific connector.\n- Typically immutable once assigned.\n- Required for operations that involve connector-specific actions.\n\n**Implementation guidance**\n- Ensure the connector ID is globally unique within the system.\n- Validate the format and existence of the connector ID before processing.\n- Use consistent naming or ID conventions to facilitate management.\n- Secure the connector ID to prevent unauthorized access or manipulation.\n\n**Examples**\n- \"conn-12345\"\n- \"connector_abcde\"\n- \"uuid-550e8400-e29b-41d4-a716-446655440000\"\n\n**Important notes**\n- The connector ID must correspond to a valid and active connector.\n- Changes to the connector ID may affect linked resources or workflows.\n- Connector IDs are often generated by the system and should not be manually altered.\n\n**Dependency chain**\n- Dependent on the existence of a connector registry or management system.\n- Used by components that require connector-specific configuration or data.\n- May be referenced by authentication, routing, or integration modules.\n\n**Technical details**\n- Typically represented as a string.\n- May follow UUID, GUID, or custom ID formats.\n- Stored and transmitted as part of API requests and responses.\n- Should be indexed in databases for efficient lookup."},"adaptorType":{"type":"string","description":"Specifies the underlying technology adapter that processes this import's operations.\n\nThis field determines:\n- Which connection types are compatible with this import\n- Which API endpoints and protocols will be used\n- Which import-specific configuration objects must be provided\n- The available features and capabilities of the import\n\nThe value must match an available adapter in the system and should correspond to the\nexternal system being accessed. For example:\n- \"HTTPImport\" for generic REST/SOAP APIs\n- \"SalesforceImport\" for Salesforce-specific operations\n- \"NetSuiteDistributedImport\" for NetSuite-specific operations\n- \"FTPImport\" for file transfers via FTP/SFTP\n\nWhen creating an import, this field must be set correctly and cannot be changed afterward\nwithout creating a new import resource.\n\n**Important notes**\n- When using a specific adapter type (e.g., \"SalesforceImport\"), you must also\nprovide the corresponding configuration object (e.g., \"salesforce\").\n- HTTPImport is the default adapter type for all REST/SOAP APIs and should be used unless there is a specialized adapter type for the specific API (such as SalesforceImport for Salesforce and NetSuiteDistributedImport for NetSuite).\n- WrapperImport is used for custom adapters built outside of Celigo, and is very rarely used.\n- RDBMSImport is used for database imports that are built into Celigo such as SQL Server, MySQL, PostgreSQL, Snowflake, Oracle, MariaDB, etc. When using RDBMSImport, populate the \"rdbms\" configuration object.\n- JDBCImport is ONLY for generic JDBC connections that are not one of the built-in RDBMS types. Do NOT use JDBCImport for Snowflake, SQL Server, MySQL, PostgreSQL, or Oracle — use RDBMSImport instead. When using JDBCImport, populate the \"jdbc\" configuration object.\n- NetSuiteDistributedImport is the default for all NetSuite imports. 
Always use NetSuiteDistributedImport when importing into NetSuite.\n- NetSuiteImport is a legacy adaptor for NetSuite accounts without the Celigo SuiteApp 2.0. Do NOT use NetSuiteImport unless explicitly requested.\n\nThe connection type must be compatible with the adaptorType specified for this import.\nFor example, if adaptorType is \"HTTPImport\", _connectionId must reference a connection with type \"http\".\n","enum":["HTTPImport","FTPImport","AS2Import","S3Import","NetSuiteImport","NetSuiteDistributedImport","SalesforceImport","JDBCImport","RDBMSImport","MongodbImport","DynamodbImport","WrapperImport","AiAgentImport","GuardrailImport","FileSystemImport","ToolImport"]},"externalId":{"type":"string","description":"A unique identifier assigned to an entity by an external system or source, used to reference or correlate the entity across different platforms or services. This ID facilitates integration, synchronization, and data exchange between systems by providing a consistent reference point.\n\n**Field behavior**\n- Must be unique within the context of the external system.\n- Typically immutable once assigned to ensure consistent referencing.\n- Used to link or map records between internal and external systems.\n- May be optional or required depending on integration needs.\n\n**Implementation guidance**\n- Ensure the externalId format aligns with the external system’s specifications.\n- Validate uniqueness to prevent conflicts or data mismatches.\n- Store and transmit the externalId securely to maintain data integrity.\n- Consider indexing this field for efficient lookup and synchronization.\n\n**Examples**\n- \"12345-abcde-67890\"\n- \"EXT-2023-0001\"\n- \"user_987654321\"\n- \"SKU-XYZ-1001\"\n\n**Important notes**\n- The externalId is distinct from internal system identifiers.\n- Changes to the externalId can disrupt data synchronization.\n- Should be treated as a string to accommodate various formats.\n- May include alphanumeric characters, dashes, or underscores.\n\n**Dependency chain**\n- Often linked to integration or synchronization modules.\n- May depend on external system availability and consistency.\n- Used in API calls that involve cross-system data referencing.\n\n**Technical details**\n- Data type: string.\n- Maximum length may vary based on external system constraints.\n- Should support UTF-8 encoding to handle diverse character sets.\n- May require validation against a specific pattern or format."},"as2":{"$ref":"#/components/schemas/As2"},"dynamodb":{"$ref":"#/components/schemas/Dynamodb"},"http":{"$ref":"#/components/schemas/Http"},"ftp":{"$ref":"#/components/schemas/Ftp"},"jdbc":{"$ref":"#/components/schemas/Jdbc"},"mongodb":{"$ref":"#/components/schemas/Mongodb"},"netsuite":{"$ref":"#/components/schemas/NetSuite"},"netsuite_da":{"$ref":"#/components/schemas/NetsuiteDistributed"},"rdbms":{"$ref":"#/components/schemas/Rdbms"},"s3":{"$ref":"#/components/schemas/S3"},"wrapper":{"$ref":"#/components/schemas/Wrapper"},"salesforce":{"$ref":"#/components/schemas/Salesforce"},"file":{"$ref":"#/components/schemas/File"},"filesystem":{"$ref":"#/components/schemas/FileSystem"},"aiAgent":{"$ref":"#/components/schemas/AiAgent"},"guardrail":{"$ref":"#/components/schemas/Guardrail"},"name":{"type":"string","description":"The name property represents the identifier or title assigned to an entity, object, or resource within the system. It is typically a human-readable string that uniquely distinguishes the item from others in the same context. 
This property is essential for referencing, searching, and displaying the entity in user interfaces and API responses.\n\n**Field behavior**\n- Must be a non-empty string.\n- Should be unique within its scope to avoid ambiguity.\n- Often used as a primary label in UI components and logs.\n- May have length constraints depending on system requirements.\n- Typically case-sensitive or case-insensitive based on implementation.\n\n**Implementation guidance**\n- Validate the input to ensure it meets length and character requirements.\n- Enforce uniqueness if required by the application context.\n- Sanitize input to prevent injection attacks or invalid characters.\n- Support internationalization by allowing Unicode characters if applicable.\n- Provide clear error messages when validation fails.\n\n**Examples**\n- \"John Doe\"\n- \"Invoice_2024_001\"\n- \"MainServer\"\n- \"Project Phoenix\"\n- \"user@example.com\"\n\n**Important notes**\n- Avoid using reserved keywords or special characters that may interfere with system processing.\n- Consider trimming whitespace from the beginning and end of the string.\n- The name should be meaningful and descriptive to improve usability.\n- Changes to the name might affect references or links in other parts of the system.\n\n**Dependency chain**\n- May be referenced by other properties such as IDs, URLs, or display labels.\n- Could be linked to permissions or access control mechanisms.\n- Often used in search and filter operations within the API.\n\n**Technical details**\n- Data type: string.\n- Encoding: UTF-8 recommended.\n- Maximum length: typically defined by system constraints (e.g., 255 characters).\n- Validation rules: regex patterns may be applied to restrict allowed characters.\n- Storage considerations: indexed for fast lookup if used frequently in queries."},"description":{"type":"string","description":"A detailed textual explanation that describes the purpose, characteristics, or context of the associated entity or item. 
This field is intended to provide users with clear and comprehensive information to better understand the subject it represents.\n**Field behavior**\n- Accepts plain text or formatted text depending on implementation.\n- Should be concise yet informative, avoiding overly technical jargon unless necessary.\n- May support multi-line input to allow detailed explanations.\n- Typically optional but recommended for clarity.\n\n**Implementation guidance**\n- Ensure the description is user-friendly and accessible to the target audience.\n- Avoid including sensitive or confidential information.\n- Use consistent terminology aligned with the overall documentation or product language.\n- Consider supporting markdown or rich text formatting if applicable.\n\n**Examples**\n- \"This API endpoint retrieves user profile information including name, email, and preferences.\"\n- \"A unique identifier assigned to each transaction for tracking purposes.\"\n- \"Specifies the date and time when the event occurred, formatted in ISO 8601.\"\n\n**Important notes**\n- Keep descriptions up to date with any changes in functionality or behavior.\n- Avoid redundancy with other fields; focus on clarifying the entity’s role or usage.\n- Descriptions should not contain executable code or commands.\n\n**Dependency chain**\n- Often linked to the entity or property it describes.\n- May influence user understanding and correct usage of the associated field.\n- Can be referenced in user guides, API documentation, or UI tooltips.\n\n**Technical details**\n- Typically stored as a string data type.\n- May have length constraints depending on system limitations.\n- Encoding should support UTF-8 to accommodate international characters.\n- Should be sanitized to prevent injection attacks if rendered in UI contexts.","maxLength":10240},"unencrypted":{"type":"object","description":"Indicates whether the data or content is stored or transmitted without encryption, meaning it is in plain text and not protected by any cryptographic methods. 
This flag helps determine if additional security measures are necessary to safeguard sensitive information.\n\n**Field behavior**\n- Represents a boolean state where `true` means data is unencrypted and `false` means data is encrypted.\n- Used to identify if the data requires encryption before storage or transmission.\n- May influence processing logic related to data security and compliance.\n\n**Implementation guidance**\n- Ensure this field is explicitly set to reflect the actual encryption status of the data.\n- Use this flag to trigger encryption routines or security checks when data is unencrypted.\n- Validate the field to prevent false positives that could lead to security vulnerabilities.\n\n**Examples**\n- `true` indicating a password stored in plain text (not recommended).\n- `false` indicating a file that has been encrypted before upload.\n- `true` for a message sent over an unsecured channel.\n\n**Important notes**\n- Storing or transmitting data with `unencrypted` set to `true` can expose sensitive information to unauthorized access.\n- This field does not perform encryption itself; it only signals the encryption status.\n- Proper handling of unencrypted data is critical for compliance with data protection regulations.\n\n**Dependency chain**\n- Often used in conjunction with fields specifying encryption algorithms or keys.\n- May affect downstream processes such as logging, auditing, or alerting mechanisms.\n- Related to security policies and access control configurations.\n\n**Technical details**\n- Typically represented as a boolean value.\n- Should be checked before performing operations that assume data confidentiality.\n- May be integrated with security frameworks to enforce encryption standards."},"sampleData":{"type":"object","description":"An array or collection of sample data entries used to demonstrate or test the functionality of the API or application. This data serves as example input or output to help users understand the expected format, structure, and content. 
It can include various data types depending on the context, such as strings, numbers, objects, or nested arrays.\n\n**Field behavior**\n- Contains example data that mimics real-world usage scenarios.\n- Can be used for testing, validation, or demonstration purposes.\n- May be optional or required depending on the API specification.\n- Should be representative of typical data to ensure meaningful examples.\n\n**Implementation guidance**\n- Populate with realistic and relevant sample entries.\n- Ensure data types and structures align with the API schema.\n- Avoid sensitive or personally identifiable information.\n- Update regularly to reflect any changes in the API data model.\n\n**Examples**\n- A list of user profiles with names, emails, and roles.\n- Sample product entries with IDs, descriptions, and prices.\n- Example sensor readings with timestamps and values.\n- Mock transaction records with amounts and statuses.\n\n**Important notes**\n- Sample data is for illustrative purposes and may not reflect actual production data.\n- Should not be used for live processing or decision-making.\n- Ensure compliance with data privacy and security standards when creating samples.\n\n**Dependency chain**\n- Dependent on the data model definitions within the API.\n- May influence or be influenced by validation rules and schema constraints.\n- Often linked to documentation and testing frameworks.\n\n**Technical details**\n- Typically formatted as JSON arrays or objects.\n- Can include nested structures to represent complex data.\n- Size and complexity should be balanced to optimize performance and clarity.\n- May be embedded directly in API responses or provided as separate resources."},"distributed":{"type":"boolean","description":"Indicates whether the import is using a distributed adaptor, such as a NetSuite Distributed Import.\n**Field behavior**\n- Accepts boolean values: true or false.\n- When true, the import is using a distributed adaptor such as a NetSuite Distributed Import.\n- When false, the import is not using a distributed adaptor.\n\n**Examples**\n- true: When the adaptorType is NetSuiteDistributedImport\n- false: In all other cases"},"maxAttempts":{"type":"number","description":"Specifies the maximum number of attempts allowed for a particular operation or action before it is considered failed or terminated. This property helps control retry logic and prevents infinite loops or excessive retries in processes such as authentication, data submission, or task execution.  \n**Field behavior:**  \n- Defines the upper limit on retry attempts for an operation.  \n- Once the maximum attempts are reached, no further retries are made.  \n- Can be used to trigger fallback mechanisms or error handling after exceeding attempts.  \n**Implementation guidance:**  \n- Set to a positive integer value representing the allowed retry count.  \n- Should be configured based on the operation's criticality and expected failure rates.  \n- Consider exponential backoff or delay strategies in conjunction with maxAttempts.  \n- Validate input to ensure it is a non-negative integer.  \n**Examples:**  \n- 3 (allowing up to three retry attempts)  \n- 5 (permitting five attempts before failure)  \n- 0 (disabling retries entirely)  \n**Important notes:**  \n- Setting this value too high may cause unnecessary resource consumption or delays.  \n- Setting it too low may result in premature failure without sufficient retry opportunities.  \n- Should be used in combination with timeout settings for robust error handling.  
\n**Dependency chain:**  \n- Often used alongside properties like retryDelay, timeout, or backoffStrategy.  \n- May influence or be influenced by error handling or circuit breaker configurations.  \n**Technical details:**  \n- Typically represented as an integer data type.  \n- Enforced by the application logic or middleware managing retries.  \n- May be configurable at runtime or deployment time depending on system design."},"ignoreExisting":{"type":"boolean","description":"**CRITICAL FIELD:** Controls whether the import should skip records that already exist in the target system rather than attempting to update them or throwing errors.\n\nWhen set to true, the import will:\n- Check if each record already exists in the target system (using a lookup configured in the import, or if the record coming into the import has a value populated in the field configured in the ignoreExtract property)\n- Skip processing for records that are found to already exist\n- Continue processing only records that don't exist (new records)\n- This is for CREATE operations where you want to avoid duplicates\n\nWhen set to false or undefined (default):\n- All records are processed normally in the creation process (if applicable)\n- Existing records may be duplicated or may cause errors depending on the API\n- No special existence checking is performed\n\n**When to set this to true (MUST detect these PATTERNS)**\n\n**ALWAYS set ignoreExisting to true** when the user's prompt includes ANY of these patterns:\n\n**Primary Patterns (very common)**\n- \"ignore existing\" / \"ignoring existing\" / \"ignore any existing\"\n- \"skip existing\" / \"skipping existing\" / \"skip any existing\"\n- \"while ignoring existing\" / \"while skipping existing\"\n- \"skip duplicates\" / \"avoid duplicates\" / \"ignore duplicates\"\n\n**Secondary Patterns**\n- \"only create new\" / \"only add new\" / \"create new only\"\n- \"don't update existing\" / \"avoid updating existing\" / \"without updating\"\n- \"create if doesn't exist\" / \"add if doesn't exist\"\n- \"insert only\" / \"create only\"\n- \"skip if already exists\" / \"ignore if exists\"\n- \"don't process existing\" / \"bypass existing\"\n\n**Context Clues**\n- User says \"create\" or \"insert\" AND mentions checking/matching against existing data\n- User wants to add records but NOT modify what's already there\n- User is concerned about duplicate prevention during creation\n\n**Examples**\n\n**Should set ignoreExisting: true**\n- \"Create customer records in Shopify while ignoring existing customers\" → ignoreExisting: true\n- \"Create vendor records while ignoring existing vendors\" → ignoreExisting: true\n- \"Import new products only, skip existing ones\" → ignoreExisting: true\n- \"Add orders to the system, ignore any that already exist\" → ignoreExisting: true\n- \"Create accounts but don't update existing ones\" → ignoreExisting: true\n- \"Insert new contacts, skip duplicates\" → ignoreExisting: true\n- \"Create records, skipping any that match existing data\" → ignoreExisting: true\n\n**Should not set ignoreExisting: true**\n- \"Update existing customer records\" → ignoreExisting: false (update operation)\n- \"Create or update records\" → ignoreExisting: false (upsert operation)\n- \"Sync all records\" → ignoreExisting: false (sync/upsert operation)\n\n**Important**\n- This field is typically used with INSERT operations\n- Common pattern: \"Create X while ignoring existing X\" → MUST set to true\n\n**When to leave this false/undefined**\n\nLeave this false or undefined 
when:\n- The prompt doesn't mention skipping or ignoring existing records\n- The import is only performing updates (no inserts)\n- The prompt says \"sync\", \"update\", or \"upsert\"\n\n**Important notes**\n- This flag requires a properly configured lookup to identify existing records, or have an ignoreExtract field configured to identify the field that is used to determine if the record already exists.\n- The lookup typically checks a unique identifier field (id, email, external_id, etc.)\n- When true, existing records are silently skipped (not updated, not errored)\n- Use this for insert-only or create-only operations\n- Do NOT use this if the user wants to update existing records\n\n**Technical details**\n- Works in conjunction with lookup configurations to determine if a record exists or has an ignoreExtract field configured to determine if the record already exists.\n- The lookup queries the target system before attempting to create the record\n- If the lookup returns a match, the record is skipped\n- If the lookup returns no match, the record is processed normally"},"ignoreMissing":{"type":"boolean","description":"Indicates whether the system should ignore missing or non-existent resources during processing. When set to true, the operation will proceed without error even if some referenced items are not found; when false, the absence of required resources will cause the operation to fail or return an error.  \n**Field behavior:**  \n- Controls error handling related to missing resources.  \n- Allows operations to continue gracefully when some data is unavailable.  \n- Affects the robustness and fault tolerance of the process.  \n**Implementation guidance:**  \n- Use true to enable lenient processing where missing data is acceptable.  \n- Use false to enforce strict validation and fail fast on missing resources.  \n- Ensure that downstream logic can handle partial results if ignoring missing items.  \n**Examples:**  \n- ignoreMissing: true — skips over missing files during batch processing.  \n- ignoreMissing: false — throws an error if a referenced database entry is not found.  \n**Important notes:**  \n- Setting this to true may lead to incomplete results if missing data is critical.  \n- Default behavior should be clearly documented to avoid unexpected failures.  \n- Consider logging or reporting missing items even when ignoring them.  \n**Dependency chain:**  \n- Often used in conjunction with resource identifiers or references.  \n- May impact validation and error reporting modules.  \n**Technical details:**  \n- Typically a boolean flag.  \n- Influences control flow and exception handling mechanisms.  \n- Should be clearly defined in API contracts to manage client expectations."},"idLockTemplate":{"type":"string","description":"The unique identifier for the lock template used to define the configuration and behavior of a lock within the system. 
This ID links the lock instance to a predefined template that specifies its properties, access rules, and operational parameters.\n\n**Field behavior**\n- Uniquely identifies a lock template within the system.\n- Used to associate a lock instance with its configuration template.\n- Typically immutable once assigned to ensure consistent behavior.\n\n**Implementation guidance**\n- Must be a valid identifier corresponding to an existing lock template.\n- Should be validated against the list of available templates before assignment.\n- Ensure uniqueness to prevent conflicts between different lock configurations.\n\n**Examples**\n- \"template-12345\"\n- \"lockTemplateA1B2C3\"\n- \"LT-987654321\"\n\n**Important notes**\n- Changing the template ID after lock creation may lead to inconsistent lock behavior.\n- The template defines critical lock parameters; ensure the correct template is referenced.\n- This field is essential for lock provisioning and management workflows.\n\n**Dependency chain**\n- Depends on the existence of predefined lock templates in the system.\n- Used by lock management and access control modules to apply configurations.\n\n**Technical details**\n- Typically represented as a string or UUID.\n- Should conform to the system’s identifier format standards.\n- Indexed for efficient lookup and retrieval in the database."},"dataURITemplate":{"type":"string","description":"A template string used to construct data URIs dynamically by embedding variable placeholders that are replaced with actual values at runtime. This template facilitates the generation of standardized data URIs for accessing or referencing resources within the system.\n\n**Field behavior**\n- Accepts a string containing placeholders for variables.\n- Placeholders are replaced with corresponding runtime values to form a complete data URI.\n- Enables dynamic and flexible URI generation based on context or input parameters.\n- Supports standard URI encoding where necessary to ensure valid URI formation.\n\n**Implementation guidance**\n- Define clear syntax for variable placeholders within the template (e.g., using curly braces or another delimiter).\n- Ensure that all required variables are provided at runtime to avoid incomplete URIs.\n- Validate the final constructed URI to confirm it adheres to URI standards.\n- Consider supporting optional variables with default values to increase template flexibility.\n\n**Examples**\n- \"data:text/plain;base64,{base64EncodedData}\"\n- \"data:{mimeType};charset={charset},{data}\"\n- \"data:image/{imageFormat};base64,{encodedImageData}\"\n\n**Important notes**\n- The template must produce a valid data URI after variable substitution.\n- Proper encoding of variable values is critical to prevent malformed URIs.\n- This property is essential for scenarios requiring inline resource embedding or data transmission via URIs.\n- Misconfiguration or missing variables can lead to invalid or unusable URIs.\n\n**Dependency chain**\n- Relies on the availability of runtime variables corresponding to placeholders.\n- May depend on encoding utilities to prepare variable values.\n- Often used in conjunction with data processing or resource generation components.\n\n**Technical details**\n- Typically a UTF-8 encoded string.\n- Supports standard URI schemes and encoding rules.\n- Variable placeholders should be clearly defined and consistently parsed.\n- May integrate with templating engines or string interpolation 
mechanisms."},"oneToMany":{"$ref":"#/components/schemas/OneToMany"},"pathToMany":{"$ref":"#/components/schemas/PathToMany"},"blobKeyPath":{"type":"string","description":"Specifies the file system path to the blob key, which uniquely identifies the binary large object (blob) within the storage system. This path is used to locate and access the blob data efficiently during processing or retrieval operations.\n**Field behavior**\n- Represents a string path pointing to the location of the blob key.\n- Used as a reference to access the associated blob data.\n- Typically immutable once set to ensure consistent access.\n- May be required for operations involving blob retrieval or manipulation.\n**Implementation guidance**\n- Ensure the path is valid and correctly formatted according to the storage system's conventions.\n- Validate the existence of the blob key at the specified path before processing.\n- Handle cases where the path may be null or empty gracefully.\n- Secure the path to prevent unauthorized access to blob data.\n**Examples**\n- \"/data/blobs/abc123def456\"\n- \"blobstore/keys/2024/06/blobkey789\"\n- \"s3://bucket-name/blob-keys/key123\"\n**Important notes**\n- The blobKeyPath must correspond to an existing blob key in the storage backend.\n- Incorrect or malformed paths can lead to failures in blob retrieval.\n- Access permissions to the path must be properly configured.\n- The format of the path may vary depending on the underlying storage technology.\n**Dependency chain**\n- Dependent on the storage system's directory or key structure.\n- Used by blob retrieval and processing components.\n- May be linked to metadata describing the blob.\n**Technical details**\n- Typically represented as a UTF-8 encoded string.\n- May include hierarchical directory structures or URI schemes.\n- Should be sanitized to prevent injection or path traversal vulnerabilities.\n- Often used as a lookup key in blob storage APIs or databases."},"blob":{"type":"boolean","description":"The binary large object (blob) representing the raw data content to be processed, stored, or transmitted. 
This field typically contains encoded or serialized data such as images, files, or other multimedia content in a compact binary format.\n\n**Field behavior**\n- Holds the actual binary data payload.\n- Can represent various data types including images, documents, or other file formats.\n- May be encoded in formats like Base64 for safe transmission over text-based protocols.\n- Treated as opaque data by the API, with no interpretation unless specified.\n\n**Implementation guidance**\n- Ensure the blob data is properly encoded (e.g., Base64) if the transport medium requires text-safe encoding.\n- Validate the size and format of the blob to meet API constraints.\n- Handle decoding and encoding consistently on both client and server sides.\n- Use streaming or chunking for very large blobs to optimize performance.\n\n**Examples**\n- A Base64-encoded JPEG image file.\n- A serialized JSON object converted into a binary format.\n- A PDF document encoded as a binary blob.\n- An audio file represented as a binary stream.\n\n**Important notes**\n- The blob content is typically opaque and should not be altered during transmission.\n- Size limits may apply depending on API or transport constraints.\n- Proper encoding and decoding are critical to avoid data corruption.\n- Security considerations should be taken into account when handling binary data.\n\n**Dependency chain**\n- May depend on encoding schemes (e.g., Base64) for safe transmission.\n- Often used in conjunction with metadata fields describing the blob type or format.\n- Requires appropriate content-type headers or descriptors for correct interpretation.\n\n**Technical details**\n- Usually represented as a byte array or Base64-encoded string in JSON APIs.\n- May require MIME type specification to indicate the nature of the data.\n- Handling may involve buffer management and memory considerations.\n- Supports binary-safe transport mechanisms to preserve data integrity."},"assistant":{"type":"string","description":"Specifies the configuration and behavior settings for the AI assistant that will interact with the user. 
This property defines how the assistant responds, including its personality, knowledge scope, response style, and any special instructions or constraints that guide its operation.\n\n**Field behavior**\n- Determines the assistant's tone, style, and manner of communication.\n- Controls the knowledge base or data sources the assistant can access.\n- Enables customization of the assistant's capabilities and limitations.\n- May include parameters for language, verbosity, and response format.\n\n**Implementation guidance**\n- Ensure the assistant configuration aligns with the intended user experience.\n- Validate that all required sub-properties within the assistant configuration are correctly set.\n- Support dynamic updates to the assistant settings to adapt to different contexts or user needs.\n- Provide defaults for unspecified settings to maintain consistent behavior.\n\n**Examples**\n- Setting the assistant to a formal tone with technical expertise.\n- Configuring the assistant to provide concise answers with references.\n- Defining the assistant to operate within a specific domain, such as healthcare or finance.\n- Enabling multi-language support for the assistant responses.\n\n**Important notes**\n- Changes to the assistant property can significantly affect user interaction quality.\n- Properly securing and validating assistant configurations is critical to prevent misuse.\n- The assistant's behavior should comply with ethical guidelines and privacy regulations.\n- Overly restrictive settings may limit the assistant's usefulness, while too broad settings may reduce relevance.\n\n**Dependency chain**\n- May depend on user preferences or session context.\n- Interacts with the underlying AI model and its capabilities.\n- Influences downstream processing of user inputs and outputs.\n- Can be linked to external knowledge bases or APIs for enhanced responses.\n\n**Technical details**\n- Typically structured as an object containing multiple nested properties.\n- May include fields such as personality traits, knowledge cutoff dates, and response constraints.\n- Supports serialization and deserialization for API communication.\n- Requires compatibility with the AI platform's configuration schema."},"deleteAfterImport":{"type":"boolean","description":"Indicates whether the source file should be deleted automatically after the import process completes successfully. 
This boolean flag helps manage storage by removing files that are no longer needed once their data has been ingested.\n\n**Field behavior**\n- When set to true, the system deletes the source file immediately after a successful import.\n- When set to false or omitted, the source file remains intact after import.\n- Deletion only occurs if the import process completes without errors.\n\n**Implementation guidance**\n- Validate that the import was successful before deleting the file.\n- Ensure proper permissions are in place to delete the file.\n- Consider logging deletion actions for audit purposes.\n- Provide clear user documentation about the implications of enabling this flag.\n\n**Examples**\n- true: The file \"data.csv\" will be removed after import.\n- false: The file \"data.csv\" will remain after import for manual review or backup.\n\n**Important notes**\n- Enabling this flag may result in data loss if the file is needed for re-import or troubleshooting.\n- Use with caution in environments where file retention policies are strict.\n- This flag does not affect files that fail to import.\n\n**Dependency chain**\n- Depends on the successful completion of the import operation.\n- May interact with file system permissions and cleanup routines.\n\n**Technical details**\n- Typically implemented as a boolean data type.\n- Triggers a file deletion API or system call post-import.\n- Should handle exceptions gracefully to avoid orphaned files or unintended deletions."},"assistantMetadata":{"type":"object","description":"Metadata containing additional information about the assistant, such as configuration details, versioning, capabilities, and any custom attributes that help in managing or identifying the assistant's behavior and context. This metadata supports enhanced control, monitoring, and customization of the assistant's interactions.\n\n**Field behavior**\n- Holds supplementary data that describes or configures the assistant.\n- Can include static or dynamic information relevant to the assistant's operation.\n- Used to influence or inform the assistant's responses and functionality.\n- May be updated or extended to reflect changes in the assistant's capabilities or environment.\n\n**Implementation guidance**\n- Structure metadata in a clear, consistent format (e.g., key-value pairs).\n- Ensure sensitive information is not exposed through metadata.\n- Use metadata to enable feature toggles, version tracking, or context awareness.\n- Validate metadata content to maintain integrity and compatibility.\n\n**Examples**\n- Version number of the assistant (e.g., \"v1.2.3\").\n- Supported languages or locales.\n- Enabled features or modules.\n- Custom tags indicating the assistant's domain or purpose.\n\n**Important notes**\n- Metadata should not contain user-specific or sensitive personal data.\n- Keep metadata concise to avoid performance overhead.\n- Changes in metadata may affect assistant behavior; manage updates carefully.\n- Metadata is primarily for internal use and may not be exposed to end users.\n\n**Dependency chain**\n- May depend on assistant configuration settings.\n- Can influence or be referenced by response generation logic.\n- Interacts with monitoring and analytics systems for tracking.\n\n**Technical details**\n- Typically represented as a JSON object or similar structured data.\n- Should be easily extensible to accommodate future attributes.\n- Must be compatible with the overall assistant API schema.\n- Should support serialization and deserialization without data 
loss."},"useTechAdaptorForm":{"type":"boolean","description":"Indicates whether the technical adaptor form should be utilized in the current process or workflow. This boolean flag determines if the system should enable and display the technical adaptor form interface for user interaction or automated processing.\n\n**Field behavior**\n- When set to true, the technical adaptor form is activated and presented to the user or system.\n- When set to false, the technical adaptor form is bypassed or hidden.\n- Influences the flow of data input and validation related to technical adaptor configurations.\n\n**Implementation guidance**\n- Default to false if the technical adaptor form is optional or not always required.\n- Ensure that enabling this flag triggers all necessary UI components and backend processes related to the technical adaptor form.\n- Validate that dependent fields or modules respond appropriately when this flag changes state.\n\n**Examples**\n- true: The system displays the technical adaptor form for configuration.\n- false: The system skips the technical adaptor form and proceeds without it.\n\n**Important notes**\n- Changing this flag may affect downstream processing and data integrity.\n- Must be synchronized with user permissions and feature availability.\n- Should be clearly documented in user guides if exposed in UI settings.\n\n**Dependency chain**\n- May depend on user roles or feature toggles enabling technical adaptor functionality.\n- Could influence or be influenced by other configuration flags related to adaptor workflows.\n\n**Technical details**\n- Typically implemented as a boolean data type.\n- Used in conditional logic to control UI rendering and backend processing paths.\n- May trigger event listeners or hooks when its value changes."},"distributedAdaptorData":{"type":"object","description":"Contains configuration and operational data specific to distributed adaptors used within the system. This data includes parameters, status information, and metadata necessary for managing and monitoring distributed adaptor instances across different nodes or environments. 
It facilitates synchronization, performance tuning, and error handling for adaptors operating in a distributed architecture.\n\n**Field behavior**\n- Holds detailed information about each distributed adaptor instance.\n- Supports dynamic updates to reflect real-time adaptor status and configuration changes.\n- Enables centralized management and monitoring of distributed adaptors.\n- May include nested structures representing various adaptor attributes and metrics.\n\n**Implementation guidance**\n- Ensure data consistency and integrity when updating distributedAdaptorData.\n- Use appropriate serialization formats to handle complex nested data.\n- Implement validation to verify the correctness of adaptor configuration parameters.\n- Design for scalability to accommodate varying numbers of distributed adaptors.\n\n**Examples**\n- Configuration settings for a distributed database adaptor including connection strings and timeout values.\n- Status reports indicating the health and performance metrics of adaptors deployed across multiple servers.\n- Metadata describing adaptor version, deployment environment, and last synchronization timestamp.\n\n**Important notes**\n- This property is critical for systems relying on distributed adaptors to function correctly.\n- Changes to this data should be carefully managed to avoid disrupting adaptor operations.\n- Sensitive information within adaptor data should be secured and access-controlled.\n\n**Dependency chain**\n- Dependent on the overall system configuration and deployment topology.\n- Interacts with monitoring and management modules that consume adaptor data.\n- May influence load balancing and failover mechanisms within the distributed system.\n\n**Technical details**\n- Typically represented as a complex object or map with key-value pairs.\n- May include arrays or lists to represent multiple adaptor instances.\n- Requires efficient parsing and serialization to minimize performance overhead.\n- Should support versioning to handle changes in adaptor data schema over time."},"filter":{"allOf":[{"description":"Filter configuration for selectively processing records during import operations.\n\n**Import-specific behavior**\n\n**Pre-Import Filtering**: Filters are applied to incoming records before they are sent to the destination system. Records that don't match the filter criteria are silently dropped and not imported.\n\n**Available filter fields**\n\nThe fields available for filtering are the data fields from each record being imported.\n"},{"$ref":"#/components/schemas/Filter"}]},"traceKeyTemplate":{"type":"string","description":"A template string used to generate unique trace keys for tracking and correlating requests across distributed systems. 
This template supports placeholders that can be dynamically replaced with runtime values such as timestamps, unique identifiers, or contextual metadata to create meaningful and consistent trace keys.\n\n**Field behavior**\n- Defines the format and structure of trace keys used in logging and monitoring.\n- Supports dynamic insertion of variables or placeholders to customize trace keys per request.\n- Ensures trace keys are unique and consistent for effective traceability.\n- Can be used to correlate logs, metrics, and traces across different services.\n\n**Implementation guidance**\n- Use clear and descriptive placeholders that map to relevant runtime data (e.g., {timestamp}, {requestId}).\n- Ensure the template produces trace keys that are unique enough to avoid collisions.\n- Validate the template syntax before applying it to avoid runtime errors.\n- Consider including service names or environment identifiers for multi-service deployments.\n- Keep the template concise to avoid excessively long trace keys.\n\n**Examples**\n- \"trace-{timestamp}-{requestId}\"\n- \"{serviceName}-{env}-{uniqueId}\"\n- \"traceKey-{userId}-{sessionId}-{epochMillis}\"\n\n**Important notes**\n- The template must be compatible with the tracing and logging infrastructure in use.\n- Placeholders should be well-defined and documented to ensure consistent usage.\n- Improper templates may lead to trace key collisions or loss of traceability.\n- Avoid sensitive information in trace keys to maintain security and privacy.\n\n**Dependency chain**\n- Relies on runtime context or metadata to replace placeholders.\n- Integrates with logging, monitoring, and tracing systems that consume trace keys.\n- May depend on unique identifier generators or timestamp providers.\n\n**Technical details**\n- Typically implemented as a string with placeholder syntax (e.g., curly braces).\n- Requires a parser or formatter to replace placeholders with actual values at runtime.\n- Should support extensibility to add new placeholders as needed.\n- May need to conform to character restrictions imposed by downstream systems."},"mockResponse":{"type":"object","description":"Defines a predefined response that the system will return when the associated API endpoint is invoked, allowing for simulation of real API behavior without requiring actual backend processing. 
This is particularly useful for testing, development, and demonstration purposes where consistent and controlled responses are needed.\n\n**Field behavior**\n- Returns the specified mock data instead of executing the real API logic.\n- Can simulate various response scenarios including success, error, and edge cases.\n- Overrides the default response when enabled.\n- Supports static or dynamic content depending on implementation.\n\n**Implementation guidance**\n- Ensure the mock response format matches the expected API response schema.\n- Use realistic data to closely mimic actual API behavior.\n- Update mock responses regularly to reflect changes in the real API.\n- Consider including headers, status codes, and body content in the mock response.\n- Provide clear documentation on when and how the mock response is used.\n\n**Examples**\n- Returning a fixed JSON object representing a user profile.\n- Simulating a 404 error response with an error message.\n- Providing a list of items with pagination metadata.\n- Mocking a successful transaction confirmation message.\n\n**Important notes**\n- Mock responses should not be used in production environments unless explicitly intended.\n- They are primarily for development, testing, and demonstration.\n- Overuse of mock responses can mask issues in the real API implementation.\n- Ensure that consumers of the API are aware when mock responses are active.\n\n**Dependency chain**\n- Dependent on the API endpoint configuration.\n- May interact with request parameters to determine the appropriate mock response.\n- Can be linked with feature flags or environment settings to toggle mock behavior.\n\n**Technical details**\n- Typically implemented as static JSON or XML payloads.\n- May include HTTP status codes and headers.\n- Can be integrated with API gateways or mocking tools.\n- Supports conditional logic in advanced implementations to vary responses."},"_ediProfileId":{"type":"string","format":"objectId","description":"The unique identifier for the Electronic Data Interchange (EDI) profile associated with the transaction or entity. 
This ID links the current data to a specific EDI configuration that defines the format, protocols, and trading partner details used for electronic communication.\n\n**Field behavior**\n- Uniquely identifies an EDI profile within the system.\n- Used to retrieve or reference EDI settings and parameters.\n- Must be consistent and valid to ensure proper EDI processing.\n- Typically immutable once assigned to maintain data integrity.\n\n**Implementation guidance**\n- Ensure the ID corresponds to an existing and active EDI profile in the system.\n- Validate the format and existence of the ID before processing transactions.\n- Use this ID to fetch EDI-specific rules, mappings, and partner information.\n- Secure the ID to prevent unauthorized changes or misuse.\n\n**Examples**\n- \"EDI12345\"\n- \"PROF-67890\"\n- \"X12_PROFILE_001\"\n\n**Important notes**\n- This ID is critical for routing and formatting EDI messages correctly.\n- Incorrect or missing IDs can lead to failed or misrouted EDI transmissions.\n- The ID should be managed centrally to avoid duplication or conflicts.\n\n**Dependency chain**\n- Depends on the existence of predefined EDI profiles in the system.\n- Used by EDI processing modules to apply correct communication standards.\n- May be linked to trading partner configurations and document types.\n\n**Technical details**\n- Typically a string or alphanumeric value.\n- Stored in a database with indexing for quick lookup.\n- May follow a naming convention defined by the organization or EDI standards.\n- Used as a foreign key in relational data models involving EDI transactions."},"parsers":{"type":["string","null"],"enum":["1"],"description":"A collection of parser configurations that define how input data should be interpreted and processed. Each parser specifies rules, patterns, or formats to extract meaningful information from raw data sources, enabling the system to handle diverse data types and structures effectively. 
This property allows customization and extension of parsing capabilities to accommodate various input formats.\n\n**Field behavior**\n- Accepts multiple parser definitions as an array or list.\n- Each parser operates independently to process specific data formats.\n- Parsers are applied in the order they are defined unless otherwise specified.\n- Supports enabling or disabling individual parsers dynamically.\n- Can include built-in or custom parser implementations.\n\n**Implementation guidance**\n- Ensure parsers are well-defined with clear matching criteria and extraction rules.\n- Validate parser configurations to prevent conflicts or overlaps.\n- Provide mechanisms to add, update, or remove parsers without downtime.\n- Support extensibility to integrate new parsing logic as needed.\n- Document each parser’s purpose and expected input/output formats.\n\n**Examples**\n- A JSON parser that extracts fields from JSON-formatted input.\n- A CSV parser that splits input lines into columns based on delimiters.\n- A regex-based parser that identifies patterns within unstructured text.\n- An XML parser that navigates hierarchical data structures.\n- A custom parser designed to interpret proprietary log file formats.\n\n**Important notes**\n- Incorrect parser configurations can lead to data misinterpretation or processing errors.\n- Parsers should be optimized for performance to handle large volumes of data efficiently.\n- Consider security implications when parsing untrusted input to avoid injection attacks.\n- Testing parsers with representative sample data is crucial for reliability.\n- Parsers may depend on external libraries or modules; ensure compatibility.\n\n**Dependency chain**\n- Relies on input data being available and accessible.\n- May depend on schema definitions or data contracts for accurate parsing.\n- Interacts with downstream components that consume parsed output.\n- Can be influenced by global settings such as character encoding or locale.\n- May require synchronization with data validation and transformation steps.\n\n**Technical details**\n- Typically implemented as modular components or plugins.\n- Configurations may include pattern definitions, field mappings, and error handling rules.\n- Supports various data formats including text, binary, and structured documents.\n- May expose APIs or interfaces for runtime configuration and monitoring.\n- Often integrated with logging and debugging tools to trace parsing operations."},"hooks":{"type":"object","properties":{"preMap":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the function to be executed during the pre-mapping hook phase. This function allows for custom logic to be applied before the main mapping process begins, enabling data transformation, validation, or any preparatory steps required. 
It should be defined in a way that it can be invoked with the appropriate context and input data.\n\n**Field behavior**\n- Executed before the mapping process starts.\n- Receives input data and context for processing.\n- Can modify or validate data prior to mapping.\n- Should return the processed data or relevant output for the next stage.\n\n**Implementation guidance**\n- Define the function with clear input parameters and expected output.\n- Ensure the function handles errors gracefully to avoid interrupting the mapping flow.\n- Keep the function focused on pre-mapping concerns only.\n- Test the function independently to verify its behavior before integration.\n\n**Examples**\n- A function that normalizes input data formats.\n- A function that filters out invalid entries.\n- A function that enriches data with additional attributes.\n- A function that logs input data for auditing purposes.\n\n**Important notes**\n- The function must be synchronous or properly handle asynchronous operations.\n- Avoid side effects that could impact other parts of the mapping process.\n- Ensure compatibility with the overall data pipeline and mapping framework.\n- Document the function’s purpose and usage clearly for maintainability.\n\n**Dependency chain**\n- Invoked after the preMap hook is triggered.\n- Outputs data consumed by the subsequent mapping logic.\n- May depend on external utilities or services for data processing.\n\n**Technical details**\n- Typically implemented as a JavaScript or TypeScript function.\n- Receives parameters such as input data object and context metadata.\n- Returns a transformed data object or a promise resolving to it.\n- Integrated into the mapping engine’s hook system for execution timing."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier for the script to be executed during the pre-mapping hook phase. This ID is used to reference and invoke the specific script that performs custom logic or transformations before the main mapping process begins.\n**Field behavior**\n- Specifies which script is triggered before the mapping operation.\n- Must correspond to a valid and existing script within the system.\n- Determines the custom pre-processing logic applied to data.\n**Implementation guidance**\n- Ensure the script ID is correctly registered and accessible in the environment.\n- Validate the script's compatibility with the preMap hook context.\n- Use consistent naming conventions for script IDs to avoid conflicts.\n**Examples**\n- \"script12345\"\n- \"preMapTransform_v2\"\n- \"hookScript_001\"\n**Important notes**\n- An invalid or missing script ID will result in the preMap hook being skipped or causing an error.\n- The script associated with this ID should be optimized for performance to avoid delays.\n- Permissions must be set appropriately to allow execution of the referenced script.\n**Dependency chain**\n- Depends on the existence of the script repository or script management system.\n- Linked to the preMap hook lifecycle event in the mapping process.\n- May interact with input data and influence subsequent mapping steps.\n**Technical details**\n- Typically represented as a string identifier.\n- Used internally by the system to locate and execute the script.\n- May be stored in a database or configuration file referencing available scripts."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the current operation or context. 
This ID is used to reference and manage the specific stack within the system, ensuring that all related resources and actions are correctly linked.  \n**Field behavior:**  \n- Uniquely identifies a stack instance within the environment.  \n- Used internally to track and manage stack-related operations.  \n- Typically immutable once assigned for a given stack lifecycle.  \n**Implementation guidance:**  \n- Should be generated or assigned by the system managing the stacks.  \n- Must be validated to ensure uniqueness within the scope of the application.  \n- Should be securely stored and transmitted to prevent unauthorized access or manipulation.  \n**Examples:**  \n- \"stack-12345abcde\"  \n- \"arn:aws:cloudformation:us-east-1:123456789012:stack/my-stack/50d6f0a0-1234-11e8-9a8b-0a1234567890\"  \n- \"projX-envY-stackZ\"  \n**Important notes:**  \n- This ID is critical for correlating resources and operations to the correct stack.  \n- Changing the stack ID after creation can lead to inconsistencies and errors.  \n- Should not be confused with other identifiers like resource IDs or deployment IDs.  \n**Dependency chain:**  \n- Depends on the stack creation or registration process to obtain the ID.  \n- Used by hooks, deployment scripts, and resource managers to reference the stack.  \n**Technical details:**  \n- Typically a string value, possibly following a specific format or pattern.  \n- May include alphanumeric characters, dashes, and colons depending on the system.  \n- Often used as a key in databases or APIs to retrieve stack information."},"configuration":{"type":"object","description":"Configuration settings that define the behavior and parameters for the preMap hook, allowing customization of its execution prior to the mapping process. This property enables users to specify options such as input validation rules, transformation parameters, or conditional logic that tailor the hook's operation to specific requirements.\n\n**Field behavior**\n- Accepts a structured object containing key-value pairs relevant to the preMap hook.\n- Influences how the preMap hook processes data before mapping occurs.\n- Can enable or disable specific features or validations within the hook.\n- Supports dynamic adjustment of hook behavior based on provided settings.\n\n**Implementation guidance**\n- Validate the configuration object against the expected schema to ensure correctness.\n- Provide default values for optional settings to maintain consistent behavior.\n- Document all configurable options clearly for users to understand their impact.\n- Allow extensibility to accommodate future configuration parameters without breaking compatibility.\n\n**Examples**\n- Setting a flag to enable strict input validation before mapping.\n- Specifying a list of fields to exclude from the mapping process.\n- Defining transformation rules such as trimming whitespace or converting case.\n- Providing conditional logic parameters to skip mapping under certain conditions.\n\n**Important notes**\n- Incorrect configuration values may cause the preMap hook to fail or behave unexpectedly.\n- Configuration should be tested thoroughly to ensure it aligns with the intended data processing flow.\n- Changes to configuration may affect downstream processes relying on the mapped data.\n- Sensitive information should not be included in the configuration to avoid security risks.\n\n**Dependency chain**\n- Depends on the preMap hook being enabled and invoked during the processing pipeline.\n- Influences the input data that will be passed 
to subsequent mapping stages.\n- May interact with other hook configurations or global settings affecting data transformation.\n\n**Technical details**\n- Typically represented as a JSON object or equivalent data structure.\n- Parsed and applied at runtime before the mapping logic executes.\n- Supports nested configuration properties for complex customization.\n- Should be immutable during the execution of the preMap hook to ensure consistency."}},"description":"A hook function that is executed before the mapping process begins. This function allows for custom preprocessing or transformation of data prior to the main mapping logic being applied. It is typically used to modify input data, validate conditions, or set up necessary context for the mapping operation.\n\n**Field behavior**\n- Invoked once before the mapping starts.\n- Receives the raw input data or context.\n- Can modify or replace the input data before mapping.\n- Can abort or alter the flow based on custom logic.\n\n**Implementation guidance**\n- Implement as a synchronous or asynchronous function depending on the environment.\n- Ensure it returns the modified data or context for the mapping to proceed.\n- Handle errors gracefully to avoid disrupting the mapping process.\n- Use for data normalization, enrichment, or validation before mapping.\n\n**Examples**\n- Sanitizing input strings to remove unwanted characters.\n- Adding default values to missing fields.\n- Logging or auditing input data before transformation.\n- Validating input schema and throwing errors if invalid.\n\n**Important notes**\n- This hook is optional; if not provided, mapping proceeds with original input.\n- Changes made here directly affect the mapping outcome.\n- Avoid heavy computations to prevent performance bottlenecks.\n- Should not perform side effects that depend on mapping results.\n\n**Dependency chain**\n- Executed before the main mapping function.\n- Influences the data passed to subsequent mapping hooks or steps.\n- May affect downstream validation or output generation.\n\n**Technical details**\n- Typically implemented as a function with signature (inputData, context) => modifiedData.\n- Supports both synchronous return values and Promises for async operations.\n- Integrated into the mapping pipeline as the first step.\n- Can be configured or replaced depending on mapping framework capabilities."},"postMap":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the name of the function to be executed during the post-map hook phase. This function is invoked after the mapping process completes, allowing for custom processing, validation, or transformation of the mapped data. 
It should be defined within the appropriate scope and adhere to the expected signature for post-map hook functions.\n\n**Field behavior**\n- Defines the callback function to run after the mapping operation.\n- Enables customization and extension of the mapping logic.\n- Executes only once per mapping cycle during the post-map phase.\n- Can modify or augment the mapped data before final output.\n\n**Implementation guidance**\n- Ensure the function name corresponds to a valid, accessible function in the execution context.\n- The function should handle any necessary error checking and data validation.\n- Avoid long-running or blocking operations within the function to maintain performance.\n- Document the function’s expected inputs and outputs clearly.\n\n**Examples**\n- \"validateMappedData\"\n- \"transformPostMapping\"\n- \"logMappingResults\"\n\n**Important notes**\n- The function must be synchronous or properly handle asynchronous behavior if supported.\n- If the function is not defined or invalid, the post-map hook will be skipped without error.\n- This hook is optional but recommended for complex mapping scenarios requiring additional processing.\n\n**Dependency chain**\n- Depends on the mapping process completing successfully.\n- May rely on data structures produced by the mapping phase.\n- Should be compatible with other hooks or lifecycle events in the mapping pipeline.\n\n**Technical details**\n- Expected to be a string representing the function name.\n- The function should accept parameters as defined by the hook interface.\n- Execution context must have access to the function at runtime.\n- Errors thrown within the function may affect the overall mapping operation depending on error handling policies."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier for the script to be executed in the postMap hook. This ID is used to reference and invoke the specific script after the mapping process completes, enabling custom logic or transformations to be applied.  \n**Field behavior:**  \n- Accepts a string representing the script's unique ID.  \n- Must correspond to an existing script within the system.  \n- Triggers the execution of the associated script after the mapping operation finishes.  \n**Implementation guidance:**  \n- Ensure the script ID is valid and the script is properly deployed before referencing it here.  \n- Use this field to extend or customize the behavior of the mapping process via scripting.  \n- Validate the script ID to prevent runtime errors during hook execution.  \n**Examples:**  \n- \"script_12345\"  \n- \"postMapTransformScript\"  \n- \"customHookScript_v2\"  \n**Important notes:**  \n- The script referenced must be compatible with the postMap hook context.  \n- Incorrect or missing script IDs will result in the hook not executing as intended.  \n- This field is optional if no post-mapping script execution is required.  \n**Dependency chain:**  \n- Depends on the existence of the script repository or script management system.  \n- Relies on the postMap hook lifecycle event to trigger execution.  \n**Technical details:**  \n- Typically a string data type.  \n- Used internally to fetch and execute the script logic.  \n- May require permissions or access rights to execute the referenced script."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the post-map hook. 
This ID is used to reference and manage the specific stack instance within the system, enabling precise tracking and manipulation of resources or configurations tied to that stack during post-processing operations.\n\n**Field behavior**\n- Uniquely identifies a stack within the system.\n- Used during post-map hook execution to associate actions with the correct stack.\n- Immutable once set to ensure consistent reference throughout the lifecycle.\n- Required for operations that involve stack-specific context or resource management.\n\n**Implementation guidance**\n- Must be a valid and existing stack identifier within the system.\n- Should be validated for format and existence before use.\n- Ensure secure handling to prevent unauthorized access or manipulation.\n- Typically generated by the system and passed to hooks automatically.\n\n**Examples**\n- \"stack-12345\"\n- \"prod-stack-67890\"\n- \"dev-environment-stack-001\"\n\n**Important notes**\n- This ID is critical for linking hook operations to the correct stack context.\n- Incorrect or missing _stackId can lead to failed or misapplied post-map operations.\n- Should not be altered manually once assigned.\n\n**Dependency chain**\n- Depends on the stack creation or registration process that generates the ID.\n- Used by post-map hooks to perform stack-specific logic.\n- May influence downstream processes that rely on stack context.\n\n**Technical details**\n- Typically a string value.\n- Format may vary depending on the system's stack naming conventions.\n- Stored and transmitted securely to maintain integrity.\n- Used as a key in database queries or API calls related to stack management."},"configuration":{"type":"object","description":"Configuration settings that define the behavior and parameters for the postMap hook. 
This object allows customization of the hook's execution by specifying various options and values that control its operation.\n\n**Field behavior**\n- Accepts a structured object containing key-value pairs relevant to the postMap hook.\n- Determines how the postMap hook processes data after mapping.\n- Can include flags, thresholds, or other parameters influencing hook logic.\n- Optional or required depending on the specific implementation of the postMap hook.\n\n**Implementation guidance**\n- Validate the configuration object against the expected schema before use.\n- Ensure all required fields within the configuration are present and correctly typed.\n- Use default values for any missing optional parameters to maintain consistent behavior.\n- Document all configurable options clearly for users to understand their impact.\n\n**Examples**\n- `{ \"enableLogging\": true, \"maxRetries\": 3 }`\n- `{ \"timeout\": 5000, \"retryOnFailure\": false }`\n- `{ \"mappingStrategy\": \"strict\", \"caseSensitive\": true }`\n\n**Important notes**\n- Incorrect configuration values may cause the postMap hook to fail or behave unexpectedly.\n- Configuration should be tailored to the specific needs of the postMap operation.\n- Changes to configuration may require restarting or reinitializing the hook process.\n\n**Dependency chain**\n- Depends on the postMap hook being invoked.\n- May influence downstream processes that rely on the output of the postMap hook.\n- Interacts with other hook configurations if multiple hooks are chained.\n\n**Technical details**\n- Typically represented as a JSON object or equivalent data structure.\n- Parsed and validated at runtime before the hook executes.\n- Supports nested objects and arrays if the hook logic requires complex configurations."}},"description":"A hook function that is executed after the mapping process is completed. This function allows for custom logic to be applied to the mapped data, enabling modifications, validations, or side effects based on the results of the mapping operation. 
It receives the mapped data as input and can return a modified version or perform asynchronous operations.\n\n**Field behavior**\n- Invoked immediately after the mapping process finishes.\n- Receives the output of the mapping as its input parameter.\n- Can modify, augment, or validate the mapped data.\n- Supports synchronous or asynchronous execution.\n- The returned value from this hook replaces or updates the mapped data.\n\n**Implementation guidance**\n- Implement as a function that accepts the mapped data object.\n- Ensure proper handling of asynchronous operations if needed.\n- Use this hook to enforce business rules or data transformations post-mapping.\n- Avoid side effects that could interfere with subsequent processing unless intentional.\n- Validate the integrity of the data before returning it.\n\n**Examples**\n- Adjusting date formats or normalizing strings after mapping.\n- Adding computed properties based on mapped fields.\n- Logging or auditing mapped data for monitoring purposes.\n- Filtering out unwanted fields or entries from the mapped result.\n- Triggering notifications or events based on mapped data content.\n\n**Important notes**\n- This hook is optional and only invoked if defined.\n- Errors thrown within this hook may affect the overall mapping operation.\n- The hook should be performant to avoid slowing down the mapping pipeline.\n- Returned data must conform to expected schema to prevent downstream errors.\n\n**Dependency chain**\n- Depends on the completion of the primary mapping function.\n- May influence subsequent processing steps that consume the mapped data.\n- Should be coordinated with pre-mapping hooks to maintain data consistency.\n\n**Technical details**\n- Typically implemented as a callback or promise-returning function.\n- Receives one argument: the mapped data object.\n- Returns either the modified data object or a promise resolving to it.\n- Integrated into the mapping lifecycle after the main mapping logic completes."},"postSubmit":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the name of the function to be executed after the submission process completes. 
This function is typically used to perform post-submission tasks such as data processing, notifications, or cleanup operations.\n\n**Field behavior**\n- Invoked automatically after the main submission event finishes.\n- Can trigger additional workflows or side effects based on submission results.\n- Supports asynchronous execution depending on the implementation.\n\n**Implementation guidance**\n- Ensure the function name corresponds to a valid, accessible function within the execution context.\n- Validate that the function handles errors gracefully to avoid disrupting the overall submission flow.\n- Document the expected input parameters and output behavior of the function for maintainability.\n\n**Examples**\n- \"sendConfirmationEmail\"\n- \"logSubmissionData\"\n- \"updateUserStatus\"\n\n**Important notes**\n- The function must be idempotent if the submission process can be retried.\n- Avoid long-running operations within this function to prevent blocking the submission pipeline.\n- Security considerations should be taken into account, especially if the function interacts with external systems.\n\n**Dependency chain**\n- Depends on the successful completion of the submission event.\n- May rely on data generated or modified during the submission process.\n- Could trigger downstream hooks or events based on its execution.\n\n**Technical details**\n- Typically referenced by name as a string.\n- Execution context must have access to the function definition.\n- May support both synchronous and asynchronous invocation patterns depending on the platform."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier of the script to be executed after the submission process completes. This ID links the post-submit hook to a specific script that performs additional operations or custom logic once the main submission workflow has finished.\n\n**Field behavior**\n- Specifies which script is triggered automatically after the submission event.\n- Must correspond to a valid and existing script within the system.\n- Controls post-processing actions such as notifications, data transformations, or logging.\n\n**Implementation guidance**\n- Ensure the script ID is correctly referenced and the script is deployed before assigning this property.\n- Validate the script’s permissions and runtime environment compatibility.\n- Use consistent naming or ID conventions to avoid conflicts or errors.\n\n**Examples**\n- \"script_12345\"\n- \"postSubmitCleanupScript\"\n- \"notifyUserScript\"\n\n**Important notes**\n- If the script ID is invalid or missing, the post-submit hook will not execute.\n- Changes to the script ID require redeployment or configuration updates.\n- The script should be idempotent and handle errors gracefully to avoid disrupting the submission flow.\n\n**Dependency chain**\n- Depends on the existence of the script resource identified by this ID.\n- Linked to the post-submit hook configuration within the submission workflow.\n- May interact with other hooks or system components triggered after submission.\n\n**Technical details**\n- Typically represented as a string identifier.\n- Must be unique within the scope of available scripts.\n- Used by the system to locate and invoke the corresponding script at runtime."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the post-submit hook. 
This ID is used to reference and manage the specific stack instance within the system, enabling operations such as updates, deletions, or status checks after a submission event.\n\n**Field behavior**\n- Uniquely identifies a stack within the system.\n- Used to link post-submit actions to the correct stack.\n- Typically immutable once assigned to ensure consistent referencing.\n- Required for executing stack-specific post-submit logic.\n\n**Implementation guidance**\n- Ensure the ID is generated following the system’s unique identification standards.\n- Validate the presence and format of the stack ID before processing post-submit hooks.\n- Use this ID to fetch or manipulate stack-related data during post-submit operations.\n- Handle cases where the stack ID may not exist or is invalid gracefully.\n\n**Examples**\n- \"stack-12345\"\n- \"arn:aws:cloudformation:us-east-1:123456789012:stack/my-stack/abcde12345\"\n- \"proj-stack-v2\"\n\n**Important notes**\n- This ID must correspond to an existing stack within the environment.\n- Incorrect or missing stack IDs can cause post-submit hooks to fail.\n- The stack ID is critical for audit trails and debugging post-submit processes.\n\n**Dependency chain**\n- Depends on the stack creation or registration process to generate the ID.\n- Used by post-submit hook handlers to identify the target stack.\n- May be referenced by logging, monitoring, or notification systems post-submission.\n\n**Technical details**\n- Typically a string value.\n- May follow a specific format such as UUID, ARN, or custom naming conventions.\n- Stored and transmitted securely to prevent unauthorized access or manipulation.\n- Should be indexed in databases for efficient retrieval during post-submit operations."},"configuration":{"type":"object","description":"Configuration settings for the post-submit hook that define its behavior and parameters. 
This object contains key-value pairs specifying how the hook should operate after a submission event, including any necessary options, flags, or environment variables.\n\n**Field behavior**\n- Determines the specific actions and parameters used by the post-submit hook.\n- Can include nested settings tailored to the hook’s functionality.\n- Modifies the execution flow or output based on provided configuration values.\n\n**Implementation guidance**\n- Validate the configuration object against the expected schema for the post-submit hook.\n- Support extensibility to allow additional parameters without breaking existing functionality.\n- Ensure sensitive information within the configuration is handled securely.\n- Provide clear error messages if required configuration fields are missing or invalid.\n\n**Examples**\n- Setting a timeout duration for the post-submit process.\n- Specifying environment variables needed during hook execution.\n- Enabling or disabling certain features of the post-submit hook via flags.\n\n**Important notes**\n- The configuration must align with the capabilities of the post-submit hook implementation.\n- Incorrect or incomplete configuration may cause the hook to fail or behave unexpectedly.\n- Configuration changes typically require validation and testing before deployment.\n\n**Dependency chain**\n- Depends on the post-submit hook being triggered successfully.\n- May interact with other hooks or system components based on configuration.\n- Relies on the underlying system to interpret and apply configuration settings correctly.\n\n**Technical details**\n- Typically represented as a JSON object or equivalent data structure.\n- Supports nested objects and arrays for complex configurations.\n- May include string, numeric, boolean, or null values depending on parameters.\n- Parsed and applied at runtime during the post-submit hook execution phase."}},"description":"A callback function or hook that is executed immediately after a form submission process completes. 
This hook allows for custom logic to be run post-submission, such as handling responses, triggering notifications, updating UI elements, or performing cleanup tasks.\n\n**Field behavior**\n- Invoked only after the form submission has finished, regardless of success or failure.\n- Receives submission result data or error information as arguments.\n- Can be asynchronous to support operations like API calls or state updates.\n- Does not affect the submission process itself but handles post-submission side effects.\n\n**Implementation guidance**\n- Ensure the function handles both success and error scenarios gracefully.\n- Avoid long-running synchronous operations to prevent blocking the UI.\n- Use this hook to trigger any follow-up actions such as analytics tracking or user feedback.\n- Validate that the hook is defined before invocation to prevent runtime errors.\n\n**Examples**\n- Logging submission results to the console.\n- Displaying a success message or error notification to the user.\n- Redirecting the user to a different page after submission.\n- Resetting form fields or updating application state based on submission outcome.\n\n**Important notes**\n- This hook is optional; if not provided, no post-submission actions will be performed.\n- It should not be used to modify the submission data itself; that should be handled before submission.\n- Proper error handling within this hook is crucial to avoid unhandled exceptions.\n\n**Dependency chain**\n- Depends on the form submission process completing.\n- May interact with state management or UI components updated after submission.\n- Can be linked with pre-submit hooks for comprehensive form lifecycle management.\n\n**Technical details**\n- Typically implemented as a function accepting parameters such as submission response and error.\n- Can return a promise to support asynchronous operations.\n- Should be registered in the form configuration under the hooks.postSubmit property.\n- Execution context may vary depending on the form library or framework used."},"postAggregate":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the function to be executed during the post-aggregation hook phase. This function is invoked after the aggregation process completes, allowing for custom processing, transformation, or validation of the aggregated data before it is returned or further processed. 
The function should be designed to handle the aggregated data structure and can modify, enrich, or analyze the results as needed.\n\n**Field behavior**\n- Executed after the aggregation operation finishes.\n- Receives the aggregated data as input.\n- Can modify or augment the aggregated results.\n- Supports custom logic tailored to specific post-processing requirements.\n\n**Implementation guidance**\n- Ensure the function signature matches the expected input and output formats.\n- Handle potential errors within the function to avoid disrupting the overall aggregation flow.\n- Keep the function efficient to minimize latency in the post-aggregation phase.\n- Validate the output of the function to maintain data integrity.\n\n**Examples**\n- A function that filters aggregated results based on certain criteria.\n- A function that calculates additional metrics from the aggregated data.\n- A function that formats or restructures the aggregated output for downstream consumption.\n\n**Important notes**\n- The function must be deterministic and side-effect free to ensure consistent results.\n- Avoid long-running operations within the function to prevent performance bottlenecks.\n- The function should not alter the original data source, only the aggregated output.\n\n**Dependency chain**\n- Depends on the completion of the aggregation process.\n- May rely on the schema or structure of the aggregated data.\n- Can be linked to subsequent hooks or processing steps that consume the modified data.\n\n**Technical details**\n- Typically implemented as a callback or lambda function.\n- May be written in the same language as the aggregation engine or as a supported scripting language.\n- Should conform to the API’s expected interface for hook functions.\n- Execution context may include metadata about the aggregation operation."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier of the script to be executed after the aggregation process completes. This ID links the post-aggregation hook to a specific script that contains the logic or operations to be performed. 
It ensures that the correct script is triggered in the post-aggregation phase, enabling customized processing or data manipulation.\n\n**Field behavior**\n- Accepts a string representing the script's unique ID.\n- Used exclusively in the post-aggregation hook context.\n- Triggers the execution of the associated script after aggregation.\n- Must correspond to an existing and accessible script within the system.\n\n**Implementation guidance**\n- Validate that the script ID exists and is active before assignment.\n- Ensure the script linked by this ID is compatible with post-aggregation data.\n- Handle errors gracefully if the script ID is invalid or the script execution fails.\n- Maintain security by restricting script IDs to authorized scripts only.\n\n**Examples**\n- \"script12345\"\n- \"postAggScript_v2\"\n- \"cleanup_after_aggregation\"\n\n**Important notes**\n- The script ID must be unique within the scope of available scripts.\n- Changing the script ID will alter the behavior of the post-aggregation hook.\n- The script referenced should be optimized for performance to avoid delays.\n- Ensure proper permissions are set for the script to execute in this context.\n\n**Dependency chain**\n- Depends on the existence of a script repository or registry.\n- Relies on the aggregation process completing successfully before execution.\n- Interacts with the postAggregate hook mechanism to trigger execution.\n\n**Technical details**\n- Typically represented as a string data type.\n- May be stored in a database or configuration file referencing script metadata.\n- Used by the system's execution engine to locate and run the script.\n- Supports integration with scripting languages or environments supported by the platform."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the post-aggregation hook. 
This ID is used to reference and manage the specific stack context within which the hook operates, ensuring that the correct stack resources and configurations are applied during the post-aggregation process.\n\n**Field behavior**\n- Uniquely identifies the stack instance related to the post-aggregation hook.\n- Used to link the hook execution context to the appropriate stack environment.\n- Immutable once set to maintain consistency throughout the hook lifecycle.\n\n**Implementation guidance**\n- Must be a valid and existing stack identifier within the system.\n- Should be assigned automatically by the system or provided explicitly during hook configuration.\n- Ensure proper validation to prevent referencing non-existent or unauthorized stacks.\n\n**Examples**\n- \"stack-12345\"\n- \"prod-stack-67890\"\n- \"dev-environment-stack-001\"\n\n**Important notes**\n- This ID is critical for resolving stack-specific resources and permissions.\n- Incorrect or missing _stackId values can lead to hook execution failures.\n- Should be handled securely to prevent unauthorized access to stack data.\n\n**Dependency chain**\n- Depends on the stack management system to generate and maintain stack IDs.\n- Utilized by the post-aggregation hook execution engine to retrieve stack context.\n- May influence downstream processes that rely on stack-specific configurations.\n\n**Technical details**\n- Typically represented as a string.\n- Format and length may vary depending on the stack management system.\n- Should be indexed or cached for efficient lookup during hook execution."},"configuration":{"type":"object","description":"Configuration settings for the post-aggregate hook that define its behavior and parameters during execution. This property allows customization of the hook's operation by specifying key-value pairs or structured data relevant to the hook's logic.\n\n**Field behavior**\n- Accepts structured data or key-value pairs that tailor the hook's functionality.\n- Influences how the post-aggregate hook processes data after aggregation.\n- Can be optional or required depending on the specific hook implementation.\n- Supports dynamic adjustment of hook behavior without code changes.\n\n**Implementation guidance**\n- Validate configuration inputs to ensure they conform to expected formats and types.\n- Document all configurable options clearly for users to understand their impact.\n- Allow for extensibility to accommodate future configuration parameters.\n- Ensure secure handling of configuration data to prevent injection or misuse.\n\n**Examples**\n- {\"threshold\": 10, \"mode\": \"strict\"}\n- {\"enableLogging\": true, \"retryCount\": 3}\n- {\"filters\": [\"status:active\", \"region:us-east\"]}\n\n**Important notes**\n- Incorrect configuration values may cause the hook to malfunction or produce unexpected results.\n- Configuration should be versioned or tracked to maintain consistency across deployments.\n- Sensitive information should not be included in configuration unless properly encrypted or secured.\n\n**Dependency chain**\n- Depends on the specific post-aggregate hook implementation to interpret configuration.\n- May interact with other hook properties such as conditions or triggers.\n- Configuration changes can affect downstream processing or output of the aggregation.\n\n**Technical details**\n- Typically represented as a JSON object or map structure.\n- Parsed and applied at runtime when the post-aggregate hook is invoked.\n- Supports nested structures for complex configuration scenarios.\n- 
Should be compatible with the overall API schema and validation rules."}},"description":"A hook function that is executed after the aggregate operation has completed. This hook allows for custom logic to be applied to the aggregated results before they are returned or further processed. It is typically used to modify, filter, or augment the aggregated data based on specific application requirements.\n\n**Field behavior**\n- Invoked immediately after the aggregate query execution.\n- Receives the aggregated results as input.\n- Can modify or replace the aggregated data.\n- Supports asynchronous operations if needed.\n- Does not affect the execution of the aggregate operation itself.\n\n**Implementation guidance**\n- Ensure the hook handles errors gracefully to avoid disrupting the response.\n- Use this hook to implement custom transformations or validations on aggregated data.\n- Avoid heavy computations to maintain performance.\n- Return the modified data or the original data if no changes are needed.\n- Consider security implications when modifying aggregated results.\n\n**Examples**\n- Filtering out sensitive fields from aggregated results.\n- Adding computed properties based on aggregation output.\n- Logging or auditing aggregated data for monitoring purposes.\n- Transforming aggregated data into a different format before sending to clients.\n\n**Important notes**\n- This hook is specific to aggregate operations and will not trigger on other query types.\n- Modifications in this hook do not affect the underlying database.\n- The hook should return the final data to be used downstream.\n- Properly handle asynchronous code to ensure the hook completes before response.\n\n**Dependency chain**\n- Triggered after the aggregate query execution phase.\n- Can influence the data passed to subsequent hooks or response handlers.\n- Dependent on the successful completion of the aggregate operation.\n\n**Technical details**\n- Typically implemented as a function accepting the aggregated results and context.\n- Supports both synchronous and asynchronous function signatures.\n- Receives parameters such as the aggregated data, query context, and hook metadata.\n- Expected to return the processed aggregated data or a promise resolving to it."}},"description":"Defines a collection of hook functions or callbacks that are triggered at specific points during the execution lifecycle of a process or operation. 
These hooks allow customization and extension of default behavior by executing user-defined logic before, after, or during certain events.\n\n**Field behavior**\n- Supports multiple hook functions, each associated with a particular event or lifecycle stage.\n- Hooks are invoked automatically by the system when their corresponding event occurs.\n- Can be used to modify input data, handle side effects, perform validations, or trigger additional processes.\n- Execution order of hooks may be sequential or parallel depending on implementation.\n\n**Implementation guidance**\n- Ensure hooks are registered with clear event names or identifiers.\n- Validate hook functions for correct signature and expected parameters.\n- Provide error handling within hooks to avoid disrupting the main process.\n- Document available hook points and expected behavior for each.\n- Allow hooks to be asynchronous if the environment supports it.\n\n**Examples**\n- A \"beforeSave\" hook that validates data before saving to a database.\n- An \"afterFetch\" hook that formats or enriches data after retrieval.\n- A \"beforeDelete\" hook that checks permissions before allowing deletion.\n- A \"onError\" hook that logs errors or sends notifications.\n\n**Important notes**\n- Hooks should not introduce significant latency or side effects that impact core functionality.\n- Avoid circular dependencies or infinite loops caused by hooks triggering each other.\n- Security considerations must be taken into account when executing user-defined hooks.\n- Hooks may have access to sensitive data; ensure proper access controls.\n\n**Dependency chain**\n- Hooks depend on the lifecycle events or triggers defined by the system.\n- Hook execution may depend on the successful completion of prior steps.\n- Hook results can influence subsequent processing stages or final outcomes.\n\n**Technical details**\n- Typically implemented as functions, methods, or callbacks registered in a map or list.\n- May support synchronous or asynchronous execution models.\n- Can accept parameters such as context objects, event data, or state information.\n- Return values from hooks may be used to modify behavior or data flow.\n- Often integrated with event emitter or observer patterns."},"sampleResponseData":{"type":"object","description":"Contains example data representing a typical response returned by the API endpoint. This sample response data helps developers understand the structure, format, and types of values they can expect when interacting with the API. 
It serves as a reference for testing, debugging, and documentation purposes.\n\n**Field behavior**\n- Represents a static example of the API's response payload.\n- Should reflect the most common or typical response scenario.\n- May include nested objects, arrays, and various data types to illustrate the response structure.\n- Does not affect the actual API behavior or response generation.\n\n**Implementation guidance**\n- Ensure the sample data is accurate and up-to-date with the current API response schema.\n- Include all required fields and typical optional fields to provide a comprehensive example.\n- Use realistic values that clearly demonstrate the data format and constraints.\n- Update the sample response whenever the API response structure changes.\n\n**Examples**\n- A JSON object showing user details returned from a user info endpoint.\n- An array of product items with fields like id, name, price, and availability.\n- A nested object illustrating a complex response with embedded metadata and links.\n\n**Important notes**\n- This data is for illustrative purposes only and should not be used as actual input or output.\n- It helps consumers of the API to quickly grasp the expected response format.\n- Should be consistent with the API specification and schema definitions.\n\n**Dependency chain**\n- Depends on the API endpoint's response schema and data model.\n- Should align with any validation rules or constraints defined for the response.\n- May be linked to example requests or other documentation elements for completeness.\n\n**Technical details**\n- Typically formatted as a JSON or XML snippet matching the API's response content type.\n- May include placeholder values or sample identifiers.\n- Should be parsable and valid according to the API's response schema."},"responseTransform":{"allOf":[{"description":"Data transformation configuration for reshaping API response data during import operations.\n\n**Import-specific behavior**\n\n**Response Transformation**: Transforms the response data returned by the destination system after records have been imported. This allows you to reshape, filter, or enrich the response before it is processed by downstream flow steps or returned to the client.\n\n**Common Use Cases**:\n- Extracting relevant fields from verbose API responses\n- Normalizing response formats across different destination systems\n- Adding computed fields based on response data\n- Filtering out sensitive information before logging or further processing\n"},{"$ref":"#/components/schemas/Transform"}]},"modelMetadata":{"type":"object","description":"Contains detailed metadata about the model, including its version, architecture, training data characteristics, and any relevant configuration parameters that define its behavior and capabilities. 
This information helps in understanding the model's provenance, performance expectations, and compatibility with various tasks or environments.\n**Field behavior**\n- Captures comprehensive details about the model's identity and configuration.\n- May include version numbers, architecture type, training dataset descriptions, and hyperparameters.\n- Used for tracking model lineage and ensuring reproducibility.\n- Supports validation and compatibility checks before deployment or usage.\n**Implementation guidance**\n- Ensure metadata is accurate and up-to-date with the current model iteration.\n- Include standardized fields for easy parsing and comparison across models.\n- Use consistent formatting and naming conventions for metadata attributes.\n- Allow extensibility to accommodate future metadata elements without breaking compatibility.\n**Examples**\n- Model version: \"v1.2.3\"\n- Architecture: \"Transformer-based, 12 layers, 768 hidden units\"\n- Training data: \"Dataset XYZ, 1 million samples, balanced classes\"\n- Hyperparameters: \"learning_rate=0.001, batch_size=32\"\n**Important notes**\n- Metadata should be immutable once the model is finalized to ensure traceability.\n- Sensitive information such as proprietary training data details should be handled carefully.\n- Metadata completeness directly impacts model governance and audit processes.\n- Incomplete or inaccurate metadata can lead to misuse or misinterpretation of the model.\n**Dependency chain**\n- Relies on accurate model training and version control systems.\n- Interacts with deployment pipelines that validate model compatibility.\n- Supports monitoring systems that track model performance over time.\n**Technical details**\n- Typically represented as a structured object or JSON document.\n- May include nested fields for complex metadata attributes.\n- Should be easily serializable and deserializable for storage and transmission.\n- Can be linked to external documentation or repositories for extended information."},"mapping":{"type":"object","properties":{"fields":{"type":"string","enum":["string","number","boolean","numberarray","stringarray","json"],"description":"Defines the collection of individual field mappings within the overall mapping configuration. Each entry specifies the characteristics, data types, and indexing options for a particular field in the dataset or document structure. 
This property enables precise control over how each field is interpreted, stored, and queried by the system.\n\n**Field behavior**\n- Contains a set of key-value pairs where each key is a field name and the value is its mapping definition.\n- Determines how data in each field is processed, indexed, and searched.\n- Supports nested fields and complex data structures.\n- Can include settings such as data type, analyzers, norms, and indexing options.\n\n**Implementation guidance**\n- Define all relevant fields explicitly to optimize search and storage behavior.\n- Use consistent naming conventions for field names.\n- Specify appropriate data types to ensure correct parsing and querying.\n- Include nested mappings for objects or arrays as needed.\n- Validate field definitions to prevent conflicts or errors.\n\n**Examples**\n- Mapping a text field with a custom analyzer.\n- Defining a date field with a specific format.\n- Specifying a keyword field for exact match searches.\n- Creating nested object fields with their own sub-fields.\n\n**Important notes**\n- Omitting fields may lead to default dynamic mapping behavior, which might not be optimal.\n- Incorrect field definitions can cause indexing errors or unexpected query results.\n- Changes to field mappings often require reindexing of existing data.\n- Field names should avoid reserved characters or keywords.\n\n**Dependency chain**\n- Depends on the overall mapping configuration context.\n- Influences indexing and search components downstream.\n- Interacts with analyzers, tokenizers, and query parsers.\n\n**Technical details**\n- Typically represented as a JSON or YAML object with field names as keys.\n- Each field mapping includes properties like \"type\", \"index\", \"analyzer\", \"fields\", etc.\n- Supports complex types such as objects, nested, geo_point, and geo_shape.\n- May include metadata fields for internal use."},"lists":{"type":"string","enum":["string","number","boolean","numberarray","stringarray"],"description":"A collection of lists that define specific groupings or categories used within the mapping context. Each list contains a set of related items or values that are referenced to organize, filter, or map data effectively. 
These lists facilitate structured data handling and improve the clarity and maintainability of the mapping configuration.\n\n**Field behavior**\n- Contains multiple named lists, each representing a distinct category or grouping.\n- Lists can be referenced elsewhere in the mapping to apply consistent logic or transformations.\n- Supports dynamic or static content depending on the mapping requirements.\n- Enables modular and reusable data definitions within the mapping.\n\n**Implementation guidance**\n- Ensure each list has a unique identifier or name for clear referencing.\n- Populate lists with relevant and validated items to avoid mapping errors.\n- Use lists to centralize repeated values or categories to simplify updates.\n- Consider the size and complexity of lists to maintain performance and readability.\n\n**Examples**\n- A list of country codes used for regional mapping.\n- A list of product categories for classification purposes.\n- A list of status codes to standardize state representation.\n- A list of user roles for access control mapping.\n\n**Important notes**\n- Lists should be kept up-to-date to reflect current data requirements.\n- Avoid duplication of items across different lists unless intentional.\n- The structure and format of list items must align with the overall mapping schema.\n- Changes to lists may impact dependent mapping logic; test thoroughly after updates.\n\n**Dependency chain**\n- Lists may depend on external data sources or configuration files.\n- Other mapping properties or rules may reference these lists for validation or transformation.\n- Updates to lists can cascade to affect downstream processing or output.\n\n**Technical details**\n- Typically represented as arrays or collections within the mapping schema.\n- Items within lists can be simple values (strings, numbers) or complex objects.\n- Supports nesting or hierarchical structures if the schema allows.\n- May include metadata or annotations to describe list purpose or usage."}},"description":"Defines the association between input data fields and their corresponding target fields or processing rules within the system. This mapping specifies how data should be transformed, routed, or interpreted during processing to ensure accurate and consistent handling. 
It serves as a blueprint for data translation and integration tasks, enabling seamless interoperability between different data formats or components.\n\n**Field behavior**\n- Maps source fields to target fields or processing instructions.\n- Determines how input data is transformed or routed.\n- Can include nested or hierarchical mappings for complex data structures.\n- Supports conditional or dynamic mapping rules based on input values.\n\n**Implementation guidance**\n- Ensure mappings are clearly defined and validated to prevent data loss or misinterpretation.\n- Use consistent naming conventions for source and target fields.\n- Support extensibility to accommodate new fields or transformation rules.\n- Provide mechanisms for error handling when mappings fail or are incomplete.\n\n**Examples**\n- Mapping a JSON input field \"user_name\" to a database column \"username\".\n- Defining a transformation rule that converts date formats from \"MM/DD/YYYY\" to \"YYYY-MM-DD\".\n- Routing data from an input field \"status\" to different processing modules based on its value.\n\n**Important notes**\n- Incorrect or incomplete mappings can lead to data corruption or processing errors.\n- Mapping definitions should be version-controlled to track changes over time.\n- Consider performance implications when applying complex or large-scale mappings.\n\n**Dependency chain**\n- Dependent on the structure and schema of input data.\n- Influences downstream data processing, validation, and storage components.\n- May interact with transformation, validation, and routing modules.\n\n**Technical details**\n- Typically represented as key-value pairs, dictionaries, or mapping objects.\n- May support expressions or functions for dynamic value computation.\n- Can be serialized in formats such as JSON, YAML, or XML for configuration.\n- Should include metadata for data types, optionality, and default values where applicable."},"mappings":{"allOf":[{"description":"Field mapping configuration for transforming incoming records during import operations.\n\n**Import-specific behavior**\n\n**Data Transformation**: Mappings define how fields from incoming records are transformed and mapped to the destination system's field structure. This enables data normalization, field renaming, value transformation, and complex nested object construction.\n\n**Common Use Cases**:\n- Mapping source field names to destination field names\n- Transforming data types and formats\n- Building nested object structures required by the destination\n- Applying formulas and lookups to derive field values\n"},{"$ref":"#/components/schemas/Mappings"}]},"lookups":{"type":"string","description":"A collection of predefined reference data or mappings used to standardize and validate input values within the system. 
Lookups typically consist of key-value pairs or enumerations that facilitate consistent data interpretation and reduce errors by providing a controlled vocabulary or set of options.\n\n**Field behavior**\n- Serves as a centralized source for reference data used across various components.\n- Enables validation and normalization of input values against predefined sets.\n- Supports dynamic retrieval of lookup values to accommodate updates without code changes.\n- May include hierarchical or categorized data structures for complex reference data.\n\n**Implementation guidance**\n- Ensure lookups are comprehensive and cover all necessary reference data for the application domain.\n- Design lookups to be easily extendable and maintainable, allowing additions or modifications without impacting existing functionality.\n- Implement caching mechanisms if lookups are frequently accessed to optimize performance.\n- Provide clear documentation for each lookup entry to aid developers and users in understanding their purpose.\n\n**Examples**\n- Country codes and names (e.g., \"US\" -> \"United States\").\n- Status codes for order processing (e.g., \"PENDING\", \"SHIPPED\", \"DELIVERED\").\n- Currency codes and symbols (e.g., \"USD\" -> \"$\").\n- Product categories or types used in an inventory system.\n\n**Important notes**\n- Lookups should be kept up-to-date to reflect changes in business rules or external standards.\n- Avoid hardcoding lookup values in multiple places; centralize them to maintain consistency.\n- Consider localization requirements if lookup values need to be presented in multiple languages.\n- Validate input data against lookups to prevent invalid or unsupported values from entering the system.\n\n**Dependency chain**\n- Often used by input validation modules, UI dropdowns, reporting tools, and business logic components.\n- May depend on external data sources or configuration files for initialization.\n- Can influence downstream processing and data storage by enforcing standardized values.\n\n**Technical details**\n- Typically implemented as dictionaries, maps, or database tables.\n- May support versioning to track changes over time.\n- Can be exposed via APIs for dynamic retrieval by client applications.\n- Should include mechanisms for synchronization if distributed across multiple systems."},"settingsForm":{"$ref":"#/components/schemas/Form"},"preSave":{"$ref":"#/components/schemas/PreSave"},"settings":{"$ref":"#/components/schemas/Settings"}}},"As2":{"type":"object","description":"Configuration for As2 imports","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"The unique identifier for the Trading Partner (TP) Connector utilized within the AS2 communication framework. This identifier serves as a critical reference that links the AS2 configuration to the specific connector responsible for the secure transmission and reception of AS2 messages. It ensures that all message exchanges are accurately routed through the designated connector, thereby maintaining the integrity, security, and reliability of the communication process. This ID is fundamental for establishing, managing, and terminating AS2 sessions, enabling proper encryption, digital signing, and delivery confirmation of messages. 
It acts as a key element in the orchestration of AS2 workflows, ensuring seamless interoperability between trading partners.\n\n**Field behavior**\n- Uniquely identifies a TP Connector within the trading partner ecosystem.\n- Associates AS2 message exchanges with the appropriate connector configuration.\n- Essential for initiating, maintaining, and terminating AS2 communication sessions.\n- Immutable once set to preserve consistent routing and session management.\n- Validated against existing connectors to prevent misconfiguration.\n\n**Implementation guidance**\n- Must correspond to a valid, active connector ID registered in the trading partner management system.\n- Should be assigned during initial AS2 setup and remain unchanged unless a deliberate reconfiguration is required.\n- Implement validation checks to confirm the ID’s existence and compatibility with AS2 protocols before assignment.\n- Ensure synchronization between the connector ID and its configuration to avoid communication failures.\n- Handle updates cautiously, with appropriate notifications and fallback mechanisms to maintain session continuity.\n\n**Examples**\n- \"connector-12345\"\n- \"tp-connector-67890\"\n- \"as2-connector-001\"\n\n**Important notes**\n- Incorrect or invalid IDs can lead to failed message transmissions or misrouted communications.\n- The referenced connector must be fully configured to support AS2 standards, including security and protocol settings.\n- Changes to this identifier should be managed through controlled processes to prevent disruption of active AS2 sessions.\n- This ID is integral to the AS2 message lifecycle, impacting message encryption, signing, and delivery confirmation.\n\n**Dependency chain**\n- Relies on the existence and proper configuration of a TP Connector entity within the trading partner management system.\n- Interacts closely with AS2 configuration parameters that govern message handling, security credentials, and session management.\n- Dependent on system components responsible for connector registration, validation, and lifecycle management.\n\n**Technical details**\n- Represented as a string conforming to the identifier format defined by the trading partner management system.\n- Used internally by the AS2"},"fileNameTemplate":{"type":"string","description":"fileNameTemplate specifies the template string used to dynamically generate file names for AS2 message payloads during both transmission and receipt processes. This template allows the inclusion of customizable placeholders that are replaced at runtime with relevant contextual values such as timestamps, unique message identifiers, sender and receiver IDs, or other metadata. By leveraging these placeholders, the system ensures that generated file names are unique, descriptive, and meaningful, facilitating efficient file management, traceability, and downstream processing. 
The template is crucial for systematically organizing files, preventing naming conflicts, supporting audit trails, and enabling seamless integration with various file storage and transfer mechanisms.\n\n**Field behavior**\n- Defines the naming pattern and structure for AS2 message payload files.\n- Supports dynamic substitution of placeholders with runtime metadata values.\n- Ensures uniqueness and clarity in generated file names to prevent collisions.\n- Directly influences file storage organization, retrieval efficiency, and processing workflows.\n- Affects audit trails and logging by enforcing consistent and meaningful file naming conventions.\n- Applies uniformly to both outgoing and incoming AS2 message payloads to maintain consistency.\n- Allows flexibility to adapt file naming to organizational standards and compliance requirements.\n\n**Implementation guidance**\n- Use clear, descriptive placeholders such as {timestamp}, {messageId}, {senderId}, {receiverId}, and {date} to capture relevant metadata.\n- Validate templates to exclude invalid or unsupported characters based on the target file system’s constraints.\n- Incorporate date and time components to improve uniqueness and enable chronological sorting of files.\n- Explicitly include file extensions (e.g., .xml, .edi, .txt) to reflect the payload format and ensure compatibility with downstream systems.\n- Provide sensible default templates to maintain system functionality when custom templates are not specified.\n- Implement robust parsing and substitution logic to reliably handle all supported placeholders and edge cases.\n- Sanitize substituted values to prevent injection of unsafe, invalid, or reserved characters that could cause errors.\n- Consider locale and timezone settings when generating date/time placeholders to ensure consistency across systems.\n- Support escape sequences or delimiter customization to handle special characters within templates if necessary.\n\n**Examples**\n- \"AS2_{senderId}_{timestamp}.xml\"\n- \"{messageId}_{receiverId}_{date}.edi\"\n- \"payload_{timestamp}.txt\"\n- \"msg_{date}_{senderId}_{receiverId}_{messageId}.xml\"\n- \"AS2_{timestamp}_{messageId}.payload\"\n- \"inbound_{receiverId}_{date}_{messageId}.xml\"\n-"},"messageIdTemplate":{"type":"string","description":"messageIdTemplate is a customizable template string designed to generate the Message-ID header for AS2 messages, enabling the creation of unique, meaningful, and standards-compliant identifiers for each message. This template supports dynamic placeholders that are replaced with actual runtime values—such as timestamps, UUIDs, sequence numbers, sender-specific information, or other contextual data—ensuring that every Message-ID is globally unique and strictly adheres to RFC 5322 email header formatting standards. 
By defining the precise format of the Message-ID, this property plays a critical role in message tracking, logging, troubleshooting, auditing, and interoperability within AS2 communication workflows, facilitating reliable message correlation and lifecycle management.\n\n**Field behavior**\n- Specifies the exact format and structure of the Message-ID header in AS2 protocol messages.\n- Supports dynamic placeholders that are substituted with real-time values during message creation.\n- Guarantees global uniqueness of each Message-ID to prevent duplication and enable accurate message identification.\n- Directly influences message correlation, tracking, diagnostics, and auditing processes across AS2 systems.\n- Ensures compliance with email header syntax and AS2 protocol requirements to maintain interoperability.\n- Affects downstream processing by systems that rely on Message-ID for message lifecycle management.\n\n**Implementation guidance**\n- Use a clear and consistent placeholder syntax (e.g., `${placeholderName}`) for dynamic content insertion.\n- Incorporate unique elements such as UUIDs, timestamps, sequence numbers, sender domains, or other identifiers to guarantee global uniqueness.\n- Validate the generated Message-ID string against AS2 protocol specifications and RFC 5322 email header standards.\n- Ensure the final Message-ID is properly formatted, enclosed in angle brackets (`< >`), and free from invalid or disallowed characters.\n- Avoid embedding sensitive or confidential information within the Message-ID to mitigate potential security risks.\n- Thoroughly test the template to confirm compatibility with receiving AS2 systems and message processing components.\n- Consider the impact of the template on downstream systems that rely on Message-ID for message correlation and tracking.\n- Maintain consistency in template usage across environments to support reliable message auditing and troubleshooting.\n\n**Examples**\n- `<${uuid}@example.com>` — generates a Message-ID combining a UUID with a domain name.\n- `<${timestamp}.${sequence}@as2sender.com>` — includes a timestamp and sequence number for ordered uniqueness.\n- `<msg-${messageNumber}@${senderDomain}>` — uses message number and sender domain placeholders for traceability.\n- `<${date}-${uuid}@company"},"maxRetries":{"type":"number","description":"Maximum number of retry attempts for the AS2 message transmission in case of transient failures such as network errors, timeouts, or temporary unavailability of the recipient system. This setting controls how many times the system will automatically attempt to resend a message after an initial failure before marking the transmission as failed. 
It applies exclusively to retry attempts following the first transmission and does not influence the initial send operation.\n\n**Field behavior**\n- Specifies the total count of resend attempts after the initial transmission failure.\n- Retries are triggered only for recoverable, transient errors where subsequent attempts have a reasonable chance of success.\n- Retry attempts cease once the configured maximum number is reached, resulting in a final failure status.\n- Does not affect the initial message send; governs only the retry logic after the first failure.\n- Retry attempts are typically spaced out using backoff strategies to prevent overwhelming the network or recipient system.\n- Each retry is initiated only after the previous attempt has conclusively failed.\n- The retry mechanism respects configured backoff intervals and timing constraints to optimize network usage and avoid congestion.\n\n**Implementation guidance**\n- Select a balanced default value to avoid excessive retries that could cause delays or strain system resources.\n- Implement exponential backoff or incremental delay strategies between retries to reduce network congestion and improve success rates.\n- Ensure retry attempts comply with overall operation timeouts and do not exceed system or protocol limits.\n- Validate input to accept only non-negative integers; setting zero disables retries entirely.\n- Integrate retry logic with error detection, logging, and alerting mechanisms for comprehensive failure handling and monitoring.\n- Consider the impact of retries on message ordering and idempotency to prevent duplicate processing or inconsistent states.\n- Provide clear feedback and detailed logging for each retry attempt to facilitate troubleshooting and operational transparency.\n\n**Examples**\n- maxRetries: 3 — The system will attempt to resend the message up to three times after the initial failure before giving up.\n- maxRetries: 0 — No retries will be performed; the failure is reported immediately after the first unsuccessful attempt.\n- maxRetries: 5 — Allows up to five retry attempts, increasing the chance of successful delivery in unstable network conditions.\n- maxRetries: 1 — A single retry attempt is made, suitable for environments with low tolerance for delays.\n\n**Important notes**\n- Excessive retry attempts can increase latency and consume additional system and network resources.\n- Retry behavior should align with AS2 protocol standards and best practices to maintain compliance and interoperability.\n- This setting improves"},"headers":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the MIME type of the AS2 message content, indicating the format of the data being transmitted. 
This helps the receiving system understand how to process and interpret the message payload.\n\n**Field behavior**\n- Defines the content type of the AS2 message payload.\n- Used by the receiver to determine how to parse and handle the message.\n- Typically corresponds to standard MIME types such as \"application/edi-x12\", \"application/edi-consent\", or \"application/xml\".\n- May influence processing rules, such as encryption, compression, or validation steps.\n\n**Implementation guidance**\n- Ensure the value is a valid MIME type string.\n- Use standard MIME types relevant to AS2 transmissions.\n- Avoid custom or non-standard MIME types unless both sender and receiver explicitly support them.\n- Validate the type value against expected content formats in the AS2 communication agreement.\n\n**Examples**\n- \"application/edi-x12\"\n- \"application/edi-consent\"\n- \"application/xml\"\n- \"text/plain\"\n\n**Important notes**\n- The type field is critical for interoperability between AS2 trading partners.\n- Incorrect or missing type values can lead to message processing errors or rejection.\n- This field is part of the AS2 message headers and should conform to AS2 protocol specifications.\n\n**Dependency chain**\n- Depends on the content of the AS2 message payload.\n- Influences downstream processing components that handle message parsing and validation.\n- May be linked with other headers such as content-transfer-encoding or content-disposition.\n\n**Technical details**\n- Represented as a string in MIME type format.\n- Included in the AS2 message headers under the \"Content-Type\" header.\n- Should comply with RFC 2045 and RFC 2046 MIME standards."},"value":{"type":"string","description":"The value of the AS2 header, representing the content associated with the specified header name.\n\n**Field behavior**\n- Holds the actual content or data for the AS2 header identified by the corresponding header name.\n- Can be a string or any valid data type supported by the AS2 protocol for header values.\n- Used during the construction or parsing of AS2 messages to specify header details.\n\n**Implementation guidance**\n- Ensure the value conforms to the expected format and encoding for AS2 headers.\n- Validate the value against any protocol-specific constraints or length limits.\n- When setting this value, consider the impact on message integrity and compliance with AS2 standards.\n- Support dynamic assignment to accommodate varying header requirements.\n\n**Examples**\n- \"application/edi-x12\"\n- \"12345\"\n- \"attachment; filename=\\\"invoice.xml\\\"\"\n- \"UTF-8\"\n\n**Important notes**\n- The value must be compatible with the corresponding header name to maintain protocol correctness.\n- Incorrect or malformed values can lead to message rejection or processing errors.\n- Some headers may require specific formatting or encoding (e.g., MIME types, character sets).\n\n**Dependency chain**\n- Dependent on the `as2.headers.name` property which specifies the header name.\n- Influences the overall AS2 message headers structure and processing logic.\n\n**Technical details**\n- Typically represented as a string in the AS2 message headers.\n- May require encoding or escaping depending on the header content.\n- Should comply with RFC 4130 and related AS2 specifications for header formatting."},"_id":{"type":"object","description":"Unique identifier for the AS2 message header.\n\n**Field behavior**\n- Serves as a unique key to identify the AS2 message header within the system.\n- Typically assigned 
automatically by the system or provided by the client to ensure message traceability.\n- Used to correlate messages and track message processing status.\n\n**Implementation guidance**\n- Should be a string value that uniquely identifies the header.\n- Must be immutable once assigned to prevent inconsistencies.\n- Should follow a consistent format (e.g., UUID, GUID) to avoid collisions.\n- Ensure uniqueness across all AS2 message headers in the system.\n\n**Examples**\n- \"123e4567-e89b-12d3-a456-426614174000\"\n- \"AS2Header_001\"\n- \"msg-header-20240427-0001\"\n\n**Important notes**\n- This field is critical for message tracking and auditing.\n- Avoid using easily guessable or sequential values to enhance security.\n- If not provided, the system should generate a unique identifier automatically.\n\n**Dependency chain**\n- May be referenced by other components or logs that track message processing.\n- Related to message metadata and processing status fields.\n\n**Technical details**\n- Data type: string\n- Format: typically UUID or a unique string identifier\n- Immutable after creation\n- Indexed for efficient lookup in databases or message stores"},"default":{"type":["object","null"],"description":"A boolean property that specifies whether the current header should be used as the default header in the AS2 message configuration. When set to true, this header will be applied automatically unless overridden by a more specific header setting.\n\n**Field behavior**\n- Determines if the header is the default choice for AS2 message headers.\n- If true, this header is applied automatically in the absence of other specific header configurations.\n- If false or omitted, the header is not considered the default and must be explicitly specified to be used.\n\n**Implementation guidance**\n- Use this property to designate a fallback or standard header configuration for AS2 messages.\n- Ensure only one header is marked as default to avoid conflicts.\n- Validate the boolean value to accept only true or false.\n- Consider the impact on message routing and processing when setting this property.\n\n**Examples**\n- `default: true` — This header is the default and will be used unless another header is specified.\n- `default: false` — This header is not the default and must be explicitly selected.\n\n**Important notes**\n- Setting multiple headers as default may lead to unpredictable behavior.\n- The default header is applied only when no other header overrides are present.\n- This property is optional; if omitted, the header is not treated as default.\n\n**Dependency chain**\n- Related to other header properties within `as2.headers`.\n- Influences message header selection logic in AS2 message processing.\n- May interact with routing or profile configurations that specify header usage.\n\n**Technical details**\n- Data type: boolean\n- Default value: false (if omitted)\n- Located under the `as2.headers` object in the configuration schema\n- Used during AS2 message construction to determine header application"}},"description":"A collection of HTTP headers included in the AS2 message transmission, serving as essential metadata and control information that governs the accurate handling, routing, authentication, and processing of the AS2 message between trading partners. These headers must strictly adhere to standard HTTP header formats and encoding rules, encompassing mandatory fields such as Content-Type, Message-ID, AS2-From, AS2-To, and other protocol-specific headers critical for AS2 communication. 
Proper configuration and validation of these headers are vital to maintaining message integrity, security (including encryption and digital signatures), and ensuring successful interoperability within the AS2 ecosystem. This collection supports both standard and custom headers as required by specific trading partner agreements or business needs, with header names treated as case-insensitive per HTTP standards.  \n**Field behavior:**  \n- Represents key-value pairs of HTTP headers transmitted alongside the AS2 message.  \n- Directly influences message routing, identification, authentication, security, and processing by the receiving system.  \n- Supports inclusion of both mandatory AS2 headers and additional custom headers as dictated by trading partner agreements or specific business requirements.  \n- Header names are case-insensitive, consistent with HTTP/1.1 specifications.  \n- Facilitates communication of protocol-specific instructions, security parameters (e.g., MIC, Disposition-Notification-To), and message metadata essential for AS2 transactions.  \n- Enables conveying information necessary for message encryption, signing, and receipt acknowledgments.  \n**Implementation guidance:**  \n- Ensure all header names and values conform strictly to HTTP/1.1 header syntax, character encoding, and escaping rules.  \n- Include all mandatory AS2 headers such as AS2-From, AS2-To, Message-ID, Content-Type, and security-related headers like MIC and Disposition-Notification-To.  \n- Avoid duplicate headers unless explicitly allowed by the AS2 specification or trading partner agreements.  \n- Validate header values rigorously for correctness, format compliance, and security to prevent injection attacks or protocol violations.  \n- Consider the impact of headers on message security features including encryption, digital signatures, and receipt acknowledgments.  \n- Maintain consistency with trading partner agreements regarding required, optional, and custom headers to ensure seamless interoperability.  \n- Use appropriate character encoding and escape sequences to handle special or non-ASCII characters within header values.  \n**Examples:**  \n- Content-Type: application/edi-x12  \n- AS2-From: PartnerA  \n- AS2-To: PartnerB  \n- Message-ID: <"}}},"Dynamodb":{"type":"object","description":"Configuration for Dynamodb imports","properties":{"region":{"type":"string","description":"Specifies the AWS region where the DynamoDB service is hosted and where all database operations will be executed. This property defines the precise geographical location of the DynamoDB instance, which is essential for optimizing application latency, ensuring compliance with data residency and sovereignty regulations, and aligning with organizational infrastructure strategies. Selecting the appropriate region helps minimize network latency by placing data physically closer to end-users or application servers, thereby improving performance and responsiveness. Additionally, the chosen region influences service availability, pricing structures, feature sets, and legal compliance requirements, making it a critical configuration parameter. The region value must be a valid AWS region identifier (such as \"us-east-1\" or \"eu-west-1\") and is used to construct the correct service endpoint URL for all DynamoDB interactions. 
Proper configuration of this property is vital for seamless integration with AWS SDKs, authentication mechanisms, and IAM policies that may be region-specific, ensuring secure and efficient access to DynamoDB resources.\n\n**Field behavior**\n- Determines the AWS data center location for all DynamoDB operations.\n- Directly impacts network latency, data residency, and compliance adherence.\n- Must be specified as a valid and supported AWS region code.\n- Influences the endpoint URL, authentication endpoints, and service routing.\n- Affects service availability, pricing, feature availability, and latency.\n- Remains consistent throughout the lifecycle of the application unless explicitly changed.\n\n**Implementation guidance**\n- Validate the region against the current list of supported AWS regions to prevent misconfiguration and runtime errors.\n- Use the region value to dynamically construct the DynamoDB service endpoint URL and configure SDK clients accordingly.\n- Allow the region to be configurable and overrideable to support testing environments, multi-region deployments, disaster recovery, or failover scenarios.\n- Ensure consistency of the region setting across all AWS service configurations within the application to avoid cross-region conflicts and data inconsistency.\n- Consider the implications of region selection on data sovereignty laws, compliance requirements, and organizational policies.\n- Monitor AWS announcements for new regions or changes in service availability to keep configurations up to date.\n\n**Examples**\n- \"us-east-1\" (Northern Virginia, USA)\n- \"eu-west-1\" (Ireland, Europe)\n- \"ap-southeast-2\" (Sydney, Australia)\n- \"ca-central-1\" (Canada Central)\n- \"sa-east-1\" (São Paulo, Brazil)\n\n**Important notes**\n- Selecting the correct region is crucial for optimizing application performance, cost efficiency,"},"method":{"type":"string","enum":["putItem","updateItem"],"description":"Specifies which DynamoDB write operation the import performs for each record. The value is a DynamoDB API action, not an HTTP verb (at the protocol level, the DynamoDB API sends every operation as an HTTP POST).\n\n**Field behavior**\n- **putItem**: Creates a new item, or fully replaces an existing item with the same primary key, using the document supplied in itemDocument.\n- **updateItem**: Modifies only the attributes referenced in updateExpression on the targeted item (creating the item if it does not already exist), leaving all other attributes unchanged.\n- Determines which sibling configuration fields the import relies on when building the request.\n\n**Implementation guidance**\n- Use putItem when each incoming record represents the complete item to be stored.\n- Use updateItem when only a subset of attributes should change, or when counters, sets, or nested attributes must be modified in place.\n- Populate the fields the chosen operation requires: itemDocument for putItem; updateExpression together with expressionAttributeNames and expressionAttributeValues for updateItem.\n- Combine either operation with conditionExpression to guard against unintended overwrites or updates.\n\n**Examples**\n- putItem: write the full mapped record as a new or replacement item.\n- updateItem: increment a counter or update selected attributes without rewriting the whole item.\n\n**Important notes**\n- putItem silently overwrites attributes that are not present in itemDocument; use updateItem for partial updates.\n- The value must be one of the enumerated operations; other DynamoDB actions are not supported by this configuration."},"tableName":{"type":"string","description":"The name of the DynamoDB table to be accessed or manipulated, serving as the primary identifier for routing API requests to the correct data store. This string must exactly match the name of an existing DynamoDB table within the specified AWS account and region, including case sensitivity. 
It is essential for performing operations such as reading, writing, updating, or deleting items within the table, ensuring that all data interactions target the intended resource accurately.\n\n**Field behavior**\n- Specifies the exact DynamoDB table targeted for all data operations.\n- Must be unique within the AWS account and region to prevent ambiguity.\n- Directs API calls to the appropriate table for the requested action.\n- Case-sensitive and must strictly adhere to DynamoDB naming conventions.\n- Acts as a critical routing parameter in API requests to identify the data source.\n\n**Implementation guidance**\n- Confirm the table name matches the existing DynamoDB table in AWS, respecting case sensitivity.\n- Avoid using reserved words or disallowed special characters to prevent API errors.\n- Validate the table name format programmatically before making API calls.\n- Manage table names through environment variables or configuration files to facilitate deployment across multiple environments (development, staging, production).\n- Ensure all application components referencing the table name are updated consistently if the table name changes.\n- Incorporate error handling to manage cases where the specified table does not exist or is inaccessible.\n\n**Examples**\n- \"Users\"\n- \"Orders2024\"\n- \"Inventory_Table\"\n- \"CustomerData\"\n- \"Sales-Data.2024\"\n\n**Important notes**\n- DynamoDB table names can be up to 255 characters in length.\n- Table names are case-sensitive and must be used consistently across all API calls.\n- The specified table must exist in the targeted AWS region before any operation is performed.\n- Changing the table name requires updating all dependent code and redeploying applications as necessary.\n- Incorrect table names will result in API errors such as ResourceNotFoundException.\n\n**Dependency chain**\n- Dependent on the AWS account and region context where DynamoDB is accessed.\n- Requires appropriate IAM permissions to perform operations on the specified table.\n- Works in conjunction with other DynamoDB parameters such as key schema, attribute definitions, and provisioned throughput settings.\n- Relies on network connectivity and AWS service availability for successful API interactions.\n\n**Technical details**\n- Represented as a string value.\n- Must conform to DynamoDB naming rules: allowed characters include alphanumeric characters, underscore (_), hy"},"partitionKey":{"type":"string","description":"partitionKey specifies the attribute name used as the primary partition key in a DynamoDB table. This key is essential for uniquely identifying each item within the table and determines the partition where the data is physically stored. It plays a fundamental role in data distribution, scalability, and query efficiency by enabling DynamoDB to hash the key value and allocate items across multiple storage partitions. The partition key must be present in every item and is required for all operations involving item creation, retrieval, update, and deletion. Selecting an appropriate partition key with high cardinality ensures even data distribution, prevents performance bottlenecks, and supports efficient query patterns. 
When combined with an optional sort key, it forms a composite primary key that enables more complex data models and range queries.\n\n**Field behavior**\n- Defines the primary attribute used to partition and organize data in a DynamoDB table.\n- Must have unique values within the context of the partition to ensure item uniqueness.\n- Drives the distribution of data across partitions to optimize performance and scalability.\n- Required for all item-level operations such as put, get, update, and delete.\n- Works in tandem with an optional sort key to form a composite primary key for more complex data models.\n\n**Implementation guidance**\n- Select an attribute with a high cardinality (many distinct values) to promote even data distribution and avoid hot partitions.\n- Ensure the partition key attribute is included in every item stored in the table.\n- Analyze application access patterns to choose a partition key that supports efficient queries and minimizes read/write contention.\n- Use a composite key (partition key + sort key) when you need to model one-to-many relationships or perform range queries.\n- Avoid using attributes with low variability or skewed distributions as partition keys to prevent performance bottlenecks.\n\n**Examples**\n- \"userId\" for applications where data is organized by individual users.\n- \"orderId\" in an e-commerce system to uniquely identify each order.\n- \"deviceId\" for storing telemetry data from IoT devices.\n- \"customerEmail\" when email addresses serve as unique identifiers.\n- \"regionCode\" combined with a sort key for geographic data partitioning.\n\n**Important notes**\n- The partition key is mandatory and immutable after table creation; changing it requires creating a new table.\n- The attribute type must be one of DynamoDB’s supported scalar types: String, Number, or Binary.\n- Partition key values are hashed internally by DynamoDB to determine the physical partition.\n- Using a poorly chosen partition key"},"sortKey":{"type":"string","description":"The attribute name designated as the sort key (also known as the range key) in a DynamoDB table. This key is essential for ordering and organizing items that share the same partition key, enabling efficient sorting, range queries, and precise retrieval of data within a partition. Together with the partition key, the sort key forms the composite primary key that uniquely identifies each item in the table. It plays a critical role in query operations by allowing filtering, sorting, and conditional retrieval based on its values, which can be of type String, Number, or Binary. 
Proper definition and usage of the sort key significantly enhance data access patterns, query performance, and cost efficiency in DynamoDB.\n\n**Field behavior**\n- Defines the attribute DynamoDB uses to sort and organize items within the same partition key.\n- Enables efficient range queries, ordered retrieval, and filtering of items.\n- Must be unique in combination with the partition key for each item to ensure item uniqueness.\n- Supports composite key structures when combined with the partition key.\n- Influences the physical storage order of items within a partition, optimizing query performance.\n\n**Implementation guidance**\n- Specify the attribute name exactly as defined in the DynamoDB table schema.\n- Ensure the attribute type matches the table definition (String, Number, or Binary).\n- Use this key to perform queries requiring sorting, range-based filtering, or conditional retrieval.\n- Omit or set to null if the DynamoDB table does not define a sort key.\n- Validate that the sort key attribute exists and is properly indexed in the table schema before use.\n- Consider the sort key design carefully to support intended query patterns and access efficiency.\n\n**Examples**\n- \"timestamp\"\n- \"createdAt\"\n- \"orderId\"\n- \"category#date\"\n- \"eventDate\"\n- \"userScore\"\n\n**Important notes**\n- The sort key is optional; some DynamoDB tables only have a partition key without a sort key.\n- Queries specifying both partition key and sort key conditions are more efficient and cost-effective.\n- The sort key attribute must be pre-defined in the table schema and cannot be added dynamically.\n- The combination of partition key and sort key must be unique for each item.\n- Sort key values influence the physical storage order of items within a partition, impacting query speed.\n- Using a well-designed sort key can reduce the need for additional indexes and improve scalability.\n\n**Dependency chain**\n- Requires a DynamoDB table configured with a composite primary key (partition key"},"itemDocument":{"type":"string","description":"The `itemDocument` property represents the complete and authoritative data record stored within a DynamoDB table. It is structured as a JSON object that comprehensively includes all attribute names and their corresponding values, fully reflecting the schema and data model of the DynamoDB item. This property encapsulates every attribute of the item, including mandatory primary keys, optional sort keys (if applicable), and any additional data fields, supporting complex nested objects, maps, lists, and arrays to accommodate DynamoDB’s rich data types. 
It serves as the primary representation of an item for all item-level operations such as reading (GetItem), writing (PutItem), updating (UpdateItem), and deleting (DeleteItem) within DynamoDB, ensuring data integrity and consistency throughout these processes.\n\n**Field behavior**\n- Contains the full and exact representation of a DynamoDB item as a structured JSON document.\n- Includes all required attributes such as primary keys and sort keys, as well as any optional or additional fields.\n- Supports complex nested data structures including maps, lists, and sets, consistent with DynamoDB’s flexible data model.\n- Acts as the main payload for CRUD (Create, Read, Update, Delete) operations on DynamoDB items.\n- Reflects either the current stored state or the intended new state of the item depending on the operation context.\n- Can represent partial documents when used with projection expressions or update operations that modify subsets of attributes.\n\n**Implementation guidance**\n- Ensure strict adherence to DynamoDB’s supported data types (String, Number, Binary, Boolean, Null, List, Map, Set) and attribute naming conventions.\n- Validate presence and correct formatting of primary key attributes to uniquely identify the item for key-based operations.\n- For update operations, provide either the entire item document or only the attributes to be modified, depending on the API method and update strategy.\n- Maintain consistent attribute names and data types across operations to preserve schema integrity and prevent runtime errors.\n- Utilize attribute projections or partial documents when full item data is unnecessary to optimize performance, reduce latency, and minimize cost.\n- Handle nested structures carefully to ensure proper serialization and deserialization between application code and DynamoDB.\n\n**Examples**\n- `{ \"UserId\": \"12345\", \"Name\": \"John Doe\", \"Age\": 30, \"Preferences\": { \"Language\": \"en\", \"Notifications\": true } }`\n- `{ \"OrderId\": \"A100\", \"Date\": \"2024-06-01\", \"Items\":"},"updateExpression":{"type":"string","description":"A string that defines the specific attributes to be updated, added, or removed within a DynamoDB item using DynamoDB's UpdateExpression syntax. This expression provides precise control over how item attributes are modified by including one or more of the following clauses: SET (to assign or overwrite attribute values), REMOVE (to delete attributes), ADD (to increment numeric attributes or add elements to sets), and DELETE (to remove elements from sets). The updateExpression enables atomic, conditional, and efficient updates without replacing the entire item, ensuring data integrity and minimizing write costs. It must be carefully constructed using placeholders for attribute names and values—ExpressionAttributeNames and ExpressionAttributeValues—to avoid conflicts with reserved keywords and prevent injection vulnerabilities. 
This expression is essential for performing complex update operations in a single request, supporting both simple attribute changes and advanced manipulations like incrementing counters or modifying sets.\n\n**Field behavior**\n- Specifies one or more atomic update operations on a DynamoDB item's attributes.\n- Supports combining multiple update actions (SET, REMOVE, ADD, DELETE) within a single expression, separated by spaces.\n- Requires strict adherence to DynamoDB's UpdateExpression syntax and semantics.\n- Works in conjunction with ExpressionAttributeNames and ExpressionAttributeValues to safely reference attribute names and values.\n- Only affects the attributes explicitly mentioned; all other attributes remain unchanged.\n- Enables conditional updates when combined with ConditionExpression for safe concurrent modifications.\n\n**Implementation guidance**\n- Construct the updateExpression using valid DynamoDB syntax, incorporating clauses such as SET, REMOVE, ADD, and DELETE as needed.\n- Always use placeholders (e.g., '#attrName' for attribute names and ':value' for attribute values) to avoid conflicts with reserved words and enhance security.\n- Validate the expression syntax and ensure all referenced placeholders are defined before executing the update operation.\n- Combine multiple update actions by separating them with spaces within the same expression string.\n- Test expressions thoroughly to prevent runtime errors caused by syntax issues or missing placeholders.\n- Use SET for assigning or overwriting attribute values, REMOVE for deleting attributes, ADD for incrementing numbers or adding set elements, and DELETE for removing set elements.\n- Avoid using reserved keywords directly; always substitute them with ExpressionAttributeNames placeholders.\n\n**Examples**\n- \"SET #name = :newName, #age = :newAge REMOVE #obsoleteAttribute\"\n- \"ADD #score :incrementValue DELETE #tags :tagToRemove\"\n- \"SET #status = :statusValue\"\n- \"REMOVE #temporaryField"},"conditionExpression":{"type":"string","description":"A string that defines the conditional logic to be applied to a DynamoDB operation, specifying the precise criteria that must be met for the operation to proceed. This expression leverages DynamoDB's condition expression syntax to evaluate attributes of the targeted item, enabling fine-grained control over operations such as PutItem, UpdateItem, DeleteItem, and transactional writes. It supports a comprehensive set of logical operators (AND, OR, NOT), comparison operators (=, <, >, BETWEEN, IN), functions (attribute_exists, attribute_not_exists, begins_with, contains, size), and attribute path notations to construct complex, nested conditions. 
This ensures data integrity by preventing unintended modifications and enforcing business rules at the database level.\n\n**Field behavior**\n- Determines whether the DynamoDB operation executes based on the evaluation of the condition against the item's attributes.\n- Evaluated atomically before performing the operation; if the condition evaluates to false, the operation is aborted and no changes are made.\n- Supports combining multiple conditions using logical operators for complex, multi-attribute criteria.\n- Utilizes built-in functions to inspect attribute existence, value patterns, and sizes, enabling sophisticated conditional checks.\n- Applies exclusively to the specific item targeted by the operation, including nested attributes accessed via attribute path notation.\n- Influences transactional operations by ensuring all conditions in a transaction are met before committing.\n\n**Implementation guidance**\n- Always use ExpressionAttributeNames and ExpressionAttributeValues to safely reference attribute names and values, avoiding conflicts with reserved keywords and improving readability.\n- Validate the syntax of the condition expression prior to request submission to prevent runtime errors and failed operations.\n- Carefully construct expressions to handle edge cases, such as attributes that may be missing or have null values.\n- Combine multiple conditions logically to enforce precise constraints and business logic on the operation.\n- Test condition expressions thoroughly across diverse data scenarios to ensure expected behavior and robustness.\n- Keep expressions as simple and efficient as possible to optimize performance and reduce request complexity.\n\n**Examples**\n- \"attribute_exists(#name) AND #age >= :minAge\"\n- \"#status = :activeStatus\"\n- \"attribute_not_exists(#id)\"\n- \"#score BETWEEN :low AND :high\"\n- \"begins_with(#title, :prefix)\"\n- \"contains(#tags, :tagValue) OR size(#comments) > :minComments\"\n\n**Important notes**\n- The conditionExpression is optional but highly recommended to safeguard against unintended overwrites, deletions, or updates.\n- If the condition"},"expressionAttributeNames":{"type":"string","description":"expressionAttributeNames is a map of substitution tokens used to safely reference attribute names within DynamoDB expressions. This property allows you to define placeholder tokens for attribute names that might otherwise cause conflicts due to being reserved words, containing special characters, or having names that are dynamically generated. 
By using these placeholders, you can avoid syntax errors and ambiguities in expressions such as ProjectionExpression, FilterExpression, UpdateExpression, and ConditionExpression, ensuring your queries and updates are both valid and maintainable.\n\n**Field behavior**\n- Functions as a dictionary where each key is a placeholder token starting with a '#' character, and each value is the actual attribute name it represents.\n- Enables the use of reserved words, special characters, or otherwise problematic attribute names within DynamoDB expressions.\n- Supports complex expressions by allowing multiple attribute name substitutions within a single expression.\n- Prevents syntax errors and improves clarity by explicitly mapping placeholders to attribute names.\n- Applies only within the scope of the expression where it is defined, ensuring localized substitution without affecting other parts of the request.\n\n**Implementation guidance**\n- Always prefix keys with a '#' to clearly indicate they are substitution tokens.\n- Ensure each placeholder token is unique within the scope of the expression to avoid conflicts.\n- Use expressionAttributeNames whenever attribute names are reserved words, contain special characters, or are dynamically generated.\n- Combine with expressionAttributeValues when expressions require both attribute name and value substitutions.\n- Verify that the attribute names used as values exactly match those defined in your DynamoDB table schema to prevent runtime errors.\n- Keep the mapping concise and relevant to the attributes used in the expression to maintain readability.\n- Avoid overusing substitution tokens for attribute names that do not require it, to keep expressions straightforward.\n\n**Examples**\n- `{\"#N\": \"Name\", \"#A\": \"Age\"}` to substitute attribute names \"Name\" and \"Age\" in an expression.\n- `{\"#status\": \"Status\"}` when \"Status\" is a reserved word or needs escaping in an expression.\n- `{\"#yr\": \"Year\", \"#mn\": \"Month\"}` for expressions involving multiple date-related attributes.\n- `{\"#addr\": \"Address\", \"#zip\": \"ZipCode\"}` to handle attribute names with special characters or spaces.\n- `{\"#P\": \"Price\", \"#Q\": \"Quantity\"}` used in an UpdateExpression to increment numeric attributes.\n\n**Important notes**\n- Keys must always begin with the '#' character; otherwise, the substitution"},"expressionAttributeValues":{"type":"string","description":"expressionAttributeValues is a map of substitution tokens for attribute values used within DynamoDB expressions. These tokens serve as placeholders that safely inject values into expressions such as ConditionExpression, FilterExpression, UpdateExpression, and KeyConditionExpression. By separating data from code, this mechanism prevents injection attacks and syntax errors, ensuring secure and reliable query and update operations. Each key in this map is a placeholder string that begins with a colon (\":\"), and each corresponding value is a DynamoDB attribute value object representing the actual data to be used in the expression. 
This design enforces that user input is never directly embedded into expressions, enhancing both security and maintainability.\n\n**Field behavior**\n- Contains key-value pairs where keys are placeholders prefixed with a colon (e.g., \":val1\") used within DynamoDB expressions.\n- Each key maps to a DynamoDB attribute value object specifying the data type and value, supporting all DynamoDB data types including strings, numbers, binaries, lists, maps, sets, and booleans.\n- Enables safe and dynamic substitution of attribute values in expressions, avoiding direct string concatenation and reducing the risk of injection vulnerabilities.\n- Mandatory when expressions reference attribute values to ensure correct parsing and execution of queries or updates.\n- Placeholders are case-sensitive and must exactly match those used in the corresponding expressions.\n- Supports complex data structures, allowing nested maps and lists as attribute values for advanced querying scenarios.\n\n**Implementation guidance**\n- Always prefix placeholder keys with a colon (\":\") to clearly identify them as substitution tokens within expressions.\n- Ensure every placeholder referenced in an expression has a corresponding entry in expressionAttributeValues to avoid runtime errors.\n- Format each value according to DynamoDB’s attribute value specification, such as { \"S\": \"string\" }, { \"N\": \"123\" }, { \"BOOL\": true }, or complex nested structures.\n- Avoid using reserved words, spaces, or invalid characters in placeholder names to prevent syntax errors and conflicts.\n- Validate that the data types of values align with the expected attribute types defined in the table schema to maintain data integrity.\n- When multiple expressions are used simultaneously, maintain unique and consistent placeholder names to prevent collisions and ambiguity.\n- Use expressionAttributeValues in conjunction with expressionAttributeNames when attribute names are reserved words or contain special characters to ensure proper expression parsing.\n- Consider the size limitations of expressionAttributeValues to avoid exceeding DynamoDB’s request size limits.\n\n**Examples**\n- { \":val1\": { \"S\": \"active\" } }\n- { \":minAge\": { \"N\": \"21\" } }"},"ignoreExtract":{"type":"string","description":"Specifies whether the extraction process from DynamoDB data should be ignored or skipped during the operation, enabling conditional control over data retrieval to optimize performance, reduce latency, or avoid redundant processing in data workflows. This flag allows systems to bypass the extraction step when data is already available, cached, or deemed unnecessary for the current operation, thereby improving efficiency and resource utilization.  \n**Field behavior:**  \n- When set to true, the extraction of data from DynamoDB is completely bypassed, preventing any data retrieval from the source.  \n- When set to false or omitted, the extraction process proceeds as normal, retrieving data for further processing.  \n- Primarily used to optimize workflows by skipping unnecessary extraction steps when data is already available or not needed.  \n- Influences the flow of data processing by controlling whether DynamoDB data is fetched or not.  \n- Affects downstream processing by potentially providing no new data input if extraction is skipped.  \n**Implementation guidance:**  \n- Accepts a boolean value: true to skip extraction, false to perform extraction.  \n- Ensure that downstream components can handle scenarios where no data is extracted due to this flag being true.  
\n- Validate input to avoid unintended skipping of extraction that could cause data inconsistencies or stale results.  \n- Use this flag judiciously, especially in complex pipelines where data dependencies exist, to prevent breaking data integrity.  \n- Incorporate appropriate logging or monitoring to track when extraction is skipped for auditability and debugging.  \n**Examples:**  \n- ignoreExtract: true — extraction from DynamoDB is skipped, useful when data is cached or pre-fetched.  \n- ignoreExtract: false — extraction occurs normally, retrieving fresh data from DynamoDB.  \n- omit the property entirely — defaults to false, extraction proceeds as usual.  \n**Important notes:**  \n- Skipping extraction may result in incomplete, outdated, or stale data if subsequent operations rely on fresh DynamoDB data.  \n- This option should only be enabled when you are certain that extraction is unnecessary or redundant to avoid data inconsistencies.  \n- Consider the impact on data integrity, consistency, and downstream processing before enabling this flag.  \n- Misuse can lead to errors, unexpected behavior, or data quality issues in data workflows.  \n- Ensure that any caching or alternative data sources used in place of extraction are reliable and up-to-date.  \n**Dependency chain:**  \n- Requires a valid DynamoDB data source configuration to be relevant.  \n- Interacts with downstream processing steps, which may receive no new data when extraction is skipped."}}},"Http":{"type":"object","description":"Configuration for HTTP imports.\n\nIMPORTANT: When the _connectionId field points to a connection where the type is http, \nthis object MUST be populated for the import to function properly. This is a required configuration\nfor all HTTP based imports, as determined by the connection associated with the import.\n","properties":{"formType":{"type":"string","enum":["assistant","http","rest","graph_ql","assistant_graphql"],"description":"The form type to use for the import.\n- **assistant**: The import is for a system that we have an assistant form for.  This is the default value if there is a connector available for the system.\n- **http**: The import is an HTTP import.\n- **rest**: This value is only used for legacy imports that are not supported by the new HTTP framework.\n- **graph_ql**: The import is a GraphQL import.\n- **assistant_graphql**: The import is for a system that we have an assistant form for and utilizes GraphQL.  This is the default value if there is a connector available for the system and the connector supports GraphQL.\n"},"type":{"type":"string","enum":["file","records"],"description":"Specifies the type of data being imported via HTTP.\n\n- **file**: The import handles raw file content (binary data, PDF, images, etc.)\n- **records**: The import handles structured record data (JSON objects, XML records, etc.) 
- most common\n\nMost HTTP imports use \"records\" type for standard REST API data imports.\nOnly use \"file\" type when importing raw file content that will be processed downstream.\n"},"requestMediaType":{"type":"string","enum":["xml","json","csv","urlencoded","form-data","octet-stream","plaintext"],"description":"Specifies the media type (content type) of the request body sent to the target API.\n\n- **json**: For JSON request bodies (Content-Type: application/json) - most common\n- **xml**: For XML request bodies (Content-Type: application/xml)\n- **csv**: For CSV data (Content-Type: text/csv)\n- **urlencoded**: For URL-encoded form data (Content-Type: application/x-www-form-urlencoded)\n- **form-data**: For multipart form data\n- **octet-stream**: For binary data\n- **plaintext**: For plain text content\n"},"_httpConnectorEndpointIds":{"type":"array","items":{"type":"string","format":"objectId"},"description":"Array of HTTP connector endpoint IDs to use for this import.\nMultiple endpoints can be specified for different request types or operations.\n"},"blobFormat":{"type":"string","enum":["utf8","ucs2","utf-16le","ascii","binary","base64","hex"],"description":"Character encoding format for blob/binary data imports.\nOnly relevant when type is \"file\" or when handling binary content.\n"},"batchSize":{"type":"integer","description":"Maximum number of records to submit per HTTP request.\n\n- REST services typically use batchSize=1 (one record per request)\n- REST batch endpoints or RPC/XML services can handle multiple records\n- Affects throughput and API rate limiting\n\nDefault varies by API - consult target API documentation for optimal value.\n"},"successMediaType":{"type":"string","enum":["xml","json","plaintext"],"description":"Specifies the media type expected in successful responses.\n\n- **json**: For JSON responses (most common)\n- **xml**: For XML responses\n- **plaintext**: For plain text responses\n"},"requestType":{"type":"array","items":{"type":"string","enum":["CREATE","UPDATE"]},"description":"Specifies the type of operation(s) this import performs.\n\n- **CREATE**: Creating new records\n- **UPDATE**: Updating existing records\n\nFor composite/upsert imports, specify both operations. The array is positionally\naligned with relativeURI and method arrays — each index maps to one operation.\nThe existingExtract field determines which operation is used at runtime.\n\nExample composite upsert:\n```json\n\"requestType\": [\"UPDATE\", \"CREATE\"],\n\"method\": [\"PUT\", \"POST\"],\n\"relativeURI\": [\"/customers/{{{data.0.customerId}}}.json\", \"/customers.json\"],\n\"existingExtract\": \"customerId\"\n```\n"},"errorMediaType":{"type":"string","enum":["xml","json","plaintext"],"description":"Specifies the media type expected in error responses.\n\n- **json**: For JSON error responses (most common)\n- **xml**: For XML error responses\n- **plaintext**: For plain text error messages\n"},"relativeURI":{"type":"array","items":{"type":"string"},"description":"**CRITICAL: This MUST be an array of strings, NOT an object.**\n\n**Handlebars data context:** At runtime, `relativeURI` templates always render against the\n**pre-mapped** record — the original input record that arrives at this import step, before\nthe Import's mapping step is applied. 
This is intentional: URI construction usually needs\nbusiness identifiers that mappings may rename or remove.\n\nArray of relative URI path(s) for the import endpoint.\nCan include handlebars expressions like `{{field}}` for dynamic values.\n\n**Single-operation imports (one element)**\n```json\n\"relativeURI\": [\"/api/v1/customers\"]\n```\n\n**Composite/upsert imports (multiple elements — positionally aligned)**\nFor imports with both CREATE and UPDATE in requestType, relativeURI, method,\nand requestType arrays are **positionally aligned**. Each index corresponds\nto one operation. The existingExtract field determines which index is used at\nruntime: if the existingExtract field has a value → use the UPDATE index;\nif empty/missing → use the CREATE index.\n\nExample of a composite upsert:\n```json\n\"relativeURI\": [\"/customers/{{{data.0.shopifyCustomerId}}}.json\", \"/customers.json\"],\n\"method\": [\"PUT\", \"POST\"],\n\"requestType\": [\"UPDATE\", \"CREATE\"],\n\"existingExtract\": \"shopifyCustomerId\"\n```\nHere index 0 is the UPDATE operation (PUT to a specific customer) and index 1 is\nthe CREATE operation (POST to the collection endpoint).\n\n**IMPORTANT**: For composite imports, use the `{{{data.0.fieldName}}}` handlebars\nsyntax (triple braces with data.0 prefix) to reference the existing record ID in\nthe UPDATE URI. Do NOT use `{{#if}}` conditionals in a single URI — use separate\narray elements instead.\n\n**Wrong format (DO NOT do this)**\n```json\n\"relativeURI\": {\"/api/v1/customers\": \"CREATE\"}\n```\n```json\n\"relativeURI\": [\"{{#if id}}/customers/{{id}}.json{{else}}/customers.json{{/if}}\"]\n```\n"},"method":{"type":"array","items":{"type":"string","enum":["GET","PUT","POST","PATCH","DELETE"]},"description":"HTTP method(s) used for import requests.\n\n- **POST**: Most common for creating new records\n- **PUT**: For full updates/replacements\n- **PATCH**: For partial updates\n- **DELETE**: For deletions\n- **GET**: Rarely used for imports\n\nFor composite/upsert imports, specify multiple methods positionally aligned\nwith relativeURI and requestType arrays. Each index maps to one operation.\n\nExample: `[\"PUT\", \"POST\"]` with `requestType: [\"UPDATE\", \"CREATE\"]` means\nindex 0 uses PUT for updates, index 1 uses POST for creates.\n"},"_httpConnectorVersionId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector version being used."},"_httpConnectorResourceId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector resource being used."},"_httpConnectorEndpointId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector endpoint being used (single endpoint)."},"body":{"type":"array","items":{"type":"object"},"description":"**Important:** This field should be LEFT UNDEFINED for the vast majority of HTTP imports.\n\nThis is an OPTIONAL field that should only be set in very specific, rare cases. 
For standard REST API imports\nthat send JSON data, this field MUST be left undefined because data transformation is handled through the\nImport's mapping step, not through this body template.\n\n**When to leave this field undefined (MOST common CASE)**\n\nLeave this field undefined for ALL standard data imports, including:\n- REST API imports that send JSON records to create/update resources\n- APIs that accept JSON request bodies (the majority of modern APIs)\n- Any import where you want to use Import mappings to transform data\n- Standard CRUD operations (Create, Update, Delete) via REST APIs\n- GraphQL mutations that accept JSON input\n- Any import where the Import mapping step will handle data transformation\n\nExamples of imports that should have this field undefined:\n- \"Import customer data into Shopify\" → undefined (mappings handle the data transformation)\n- \"Create orders via custom REST API\" → undefined (mappings handle the data transformation)\n\n**KEY CONCEPT:** Imports have a dedicated \"Mapping\" step that transforms source data into the target\nformat. This is the standard, preferred approach. When the body field is set, the mapping step still runs\nfirst — the body template then renders against the **post-mapped** record instead of the destination API\nreceiving the mapped record as-is. This lets you post-process the mapped output into XML, SOAP envelopes,\nor other non-JSON shapes, but forces you to hand-template the entire request and is harder to maintain and debug.\n\n**When to set this field (RARE use CASE)**\n\nSet this field ONLY when:\n1. The target API requires XML or SOAP format (not JSON)\n2. You need a highly customized request structure that cannot be achieved through standard mappings\n3. You're working with a legacy API that requires very specific formatting\n4. The API documentation explicitly shows a complex XML/SOAP envelope structure\n5. When the user explicitly specifies a request body\n\nExamples of when to set body:\n- \"Import to SOAP web service\" → set body with XML/SOAP template\n- \"Send data to XML-only legacy API\" → set body with XML template\n- \"Complex SOAP envelope with multiple nested elements\" → set body with SOAP template\n- \"Create an import to /test with the body as {'key': 'value', 'key2': 'value2', ...}\"\n\n**Implementation details**\n\nWhen this field is undefined (default for most imports):\n- The system uses the Import's mapping step to transform data\n- Mappings provide a visual, intuitive way to map source fields to target fields\n- The mapped data is automatically sent as the request body\n- Easier to debug and maintain\n\nWhen this field is set:\n- The mapping step still runs — it produces a post-mapped record that is then fed into this body template\n- The template renders against the **post-mapped** record (use `{{record.<mappedField>}}` to reference mapping outputs)\n- You are responsible for shaping the final request body (XML/SOAP/custom JSON) by wrapping / restructuring the mapped data\n- Harder to maintain and debug than relying on mappings alone\n\n**Decision flowchart**\n\n1. Does the target API accept JSON request bodies?\n   → YES: Leave this field undefined (use mappings instead)\n2. Are you comfortable using the Import's mapping step?\n   → YES: Leave this field undefined\n3. Does the API require XML or SOAP format?\n   → YES: Set this field with appropriate XML/SOAP template\n4. 
Does the API require a highly unusual request structure that mappings can't handle?\n   → YES: Consider setting this field (but try mappings first)\n\nRemember: When in doubt, leave this field undefined. Almost all modern REST APIs work perfectly\nwith the standard mapping approach, and this is much easier to use and maintain.\n"},"existingExtract":{"type":"string","description":"Field name or JSON path used to determine if a record already exists in the destination,\nfor composite (upsert) imports that have both CREATE and UPDATE request types.\n\nWhen the import has requestType: [\"UPDATE\", \"CREATE\"] (or [\"CREATE\", \"UPDATE\"]):\n- If the field specified by existingExtract has a value in the incoming record,\n  the UPDATE operation (PUT) is used with the corresponding relativeURI and method.\n- If the field is empty or missing, the CREATE operation (POST) is used.\n\nThis field drives the upsert decision and must match a field that is populated by\nan upstream lookup or response mapping (e.g., a destination system ID like \"shopifyCustomerId\").\n\nIMPORTANT: Only set this field for composite/upsert imports that have BOTH CREATE and UPDATE\nin requestType. Do not set for single-operation imports.\n"},"ignoreExtract":{"type":"string","description":"JSON path to the field in the record that is used to determine if the record already exists.\nOnly used when ignoreMissing or ignoreExisting is set.\n"},"endPointBodyLimit":{"type":"integer","description":"Maximum size limit for the request body in bytes.\nUsed to enforce API-specific size constraints.\n"},"headers":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}},"description":"Custom HTTP headers to include with import requests.\n\nEach header has a name and value, which can include handlebars expressions.\n\nAt runtime, header value templates render against the **pre-mapped**\nrecord (the original input record, before the Import's mapping step).\nUse `{{record.<field>}}` to reference fields as they appear in the\nupstream source — mappings don't rename or drop fields for header\nevaluation.\n"},"response":{"type":"object","properties":{"resourcePath":{"type":"array","items":{"type":"string"},"description":"JSON path to the resource collection in the response.\nRequired when batchSize > 1.\n"},"resourceIdPath":{"type":"array","items":{"type":"string"},"description":"JSON path to the ID field in each resource.\nIf not specified, system looks for \"id\" or \"_id\" fields.\n"},"successPath":{"type":"array","items":{"type":"string"},"description":"JSON path to a field that indicates success.\nUsed for APIs that don't rely solely on HTTP status codes.\n"},"successValues":{"type":"array","items":{"type":"array","items":{"type":"string"}},"description":"Values at successPath that indicate success.\n"},"failPath":{"type":"array","items":{"type":"string"},"description":"JSON path to a field that indicates failure (even if status=200).\n"},"failValues":{"type":"array","items":{"type":"array","items":{"type":"string"}},"description":"Values at failPath that indicate failure.\n"},"errorPath":{"type":"string","description":"JSON path to error message in the response.\n"},"allowArrayforSuccessPath":{"type":"boolean","description":"Whether to allow array values for successPath evaluation.\n"},"hasHeader":{"type":"boolean","description":"Indicates if the first record in the response is a header row (for CSV responses).\n"}},"description":"Configuration for parsing and interpreting HTTP 
responses.\n"},"_asyncHelperId":{"type":"string","format":"objectId","description":"Reference to an AsyncHelper resource for handling asynchronous import operations.\n\nUsed when the target API uses a \"fire-and-check-back\" pattern (HTTP 202, polling, etc.).\n"},"ignoreLookupName":{"type":"string","description":"Name of the lookup to use for checking resource existence.\nOnly used when ignoreMissing or ignoreExisting is set.\n"}}},"Ftp":{"type":"object","description":"Configuration for Ftp exports","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"Identifier for the third-party connector utilized in the FTP integration, serving as a unique and immutable reference that links the FTP configuration to an external connector service. This identifier is essential for accurately associating the FTP connection with the intended third-party system, enabling seamless and secure data transfer, synchronization, and management through the specified connector. It ensures that all FTP operations are correctly routed and managed via the external service, supporting reliable integration workflows and maintaining system integrity throughout the connection lifecycle.\n\n**Field behavior**\n- Uniquely identifies the third-party connector associated with the FTP configuration.\n- Acts as a key reference to bind FTP settings to a specific external connector service.\n- Required when creating, updating, or managing FTP connections via third-party integrations.\n- Typically immutable after initial assignment to preserve connection consistency.\n- Used by system components to route FTP operations through the correct external connector.\n\n**Implementation guidance**\n- Should be a string or numeric identifier that aligns with the third-party connector’s identification scheme.\n- Must be validated against the system’s registry of available connectors to ensure validity.\n- Should be set securely and protected from unauthorized modification.\n- Updates to this identifier should be handled cautiously, with appropriate validation and impact assessment.\n- Ensure the identifier complies with any formatting or naming conventions imposed by the third-party service.\n- Transmit and store this identifier securely, using encryption where applicable.\n\n**Examples**\n- \"connector_12345\"\n- \"tpConnector-67890\"\n- \"ftpThirdPartyConnector01\"\n- \"extConnector_abcde\"\n- \"thirdPartyFTP_001\"\n\n**Important notes**\n- This field is critical for correctly mapping FTP configurations to their corresponding third-party services.\n- An incorrect or missing _tpConnectorId can lead to connection failures, data misrouting, or integration errors.\n- Synchronization between this identifier and the third-party connector records must be maintained to avoid inconsistencies.\n- Changes to this field may require re-authentication or reconfiguration of the FTP integration.\n- Proper error handling should be implemented to manage cases where the identifier does not match any registered connector.\n\n**Dependency chain**\n- Depends on the existence and registration of third-party connectors within the system.\n- Referenced by FTP connection management modules, authentication services, and integration workflows.\n- Interacts with authorization mechanisms to ensure secure access to third-party services.\n- May influence logging, monitoring, and auditing processes related to FTP integrations.\n\n**Technical details**\n- Data type: string (or"},"directoryPath":{"type":"string","description":"The path to the 
directory on the FTP server where files will be accessed, stored, or managed. This path can be specified either as an absolute path starting from the root directory of the FTP server or as a relative path based on the user's initial directory upon login, depending on the server's configuration and permissions. It is essential that the path adheres to the FTP server's directory structure, naming conventions, and access controls to ensure successful file operations such as uploading, downloading, listing, or deleting files. The directoryPath serves as the primary reference point for all file-related commands during an FTP session, determining the scope and context of file management activities.\n\n**Field behavior**\n- Defines the target directory location on the FTP server for all file-related operations.\n- Supports both absolute and relative path formats, interpreted according to the FTP server’s setup.\n- Must comply with the server’s directory hierarchy, naming rules, and access permissions.\n- Used by the FTP client to navigate and perform operations within the specified directory.\n- Influences the scope of accessible files and subdirectories during FTP sessions.\n- Changes to this path affect subsequent file operations until updated or reset.\n\n**Implementation guidance**\n- Validate the directory path format to ensure compatibility with the FTP server, typically using forward slashes (`/`) as separators.\n- Normalize the path to remove redundant elements such as `.` (current directory) and `..` (parent directory) to prevent errors and security vulnerabilities.\n- Handle scenarios where the specified directory does not exist by either creating it if permissions allow or returning a clear, descriptive error message.\n- Properly encode special characters and escape sequences in the path to conform with FTP protocol requirements and server expectations.\n- Provide informative and user-friendly error messages when the path is invalid, inaccessible, or lacks sufficient permissions.\n- Sanitize input to prevent directory traversal attacks or unauthorized access to restricted areas.\n- Consider server-specific behaviors such as case sensitivity, symbolic links, and virtual directories when interpreting the path.\n- Ensure that path changes are atomic and consistent to avoid race conditions during concurrent FTP operations.\n\n**Examples**\n- `/uploads/images`\n- `documents/reports/2024`\n- `/home/user/data`\n- `./backup`\n- `../shared/resources`\n- `/var/www/html`\n- `projects/current`\n\n**Important notes**\n- Access to the specified directory depends on the credentials used during FTP authentication.\n- Directory permissions directly affect the ability to read, write, or list files within the"},"fileName":{"type":"string","description":"The name of the file to be accessed, transferred, or manipulated via FTP (File Transfer Protocol). This property specifies the exact filename, including its extension, uniquely identifying the file within the FTP directory. It is critical for accurately targeting the file during operations such as upload, download, rename, or delete. The filename must be precise and comply with the naming conventions and restrictions of the FTP server’s underlying file system to avoid errors. Case sensitivity depends on the server’s operating system, and special attention should be given to character encoding, especially when handling non-ASCII or international characters. 
The filename should not include any directory path separators, as these are managed separately in directory or path properties.\n\n**Field behavior**\n- Identifies the specific file within the FTP server directory for various file operations.\n- Must include the file extension to ensure precise identification.\n- Case sensitivity is determined by the FTP server’s operating system.\n- Excludes directory or path information; only the filename itself is specified.\n- Used in conjunction with directory or path properties to locate the full file path.\n\n**Implementation guidance**\n- Validate that the filename contains only characters allowed by the FTP server’s file system.\n- Ensure the filename is a non-empty, well-formed string without path separators.\n- Normalize case if required to match server-specific case sensitivity rules.\n- Properly handle encoding for special or international characters to prevent transfer errors.\n- Combine with directory or path properties to construct the complete file location.\n- Avoid embedding directory paths or separators within the filename property.\n\n**Default behavior**\n- When the user does not specify a file naming convention, default to timestamped filenames using {{timestamp}} (e.g., \"items-{{timestamp}}.csv\"). This ensures each run produces a unique file and avoids overwriting previous exports.\n- Only use a fixed filename (without timestamp) if the user explicitly requests overwriting or a fixed name.\n\n**Examples**\n- \"report.pdf\"\n- \"image_2024.png\"\n- \"backup_2023_12_31.zip\"\n- \"data.csv\"\n- \"items-{{timestamp}}.csv\"\n\n**Important notes**\n- The filename alone may not be sufficient to locate the file without the corresponding directory or path.\n- Different FTP servers and operating systems may have varying rules regarding case sensitivity.\n- Path separators such as \"/\" or \"\\\" must not be included in the filename.\n- Adherence to FTP server naming conventions and restrictions is essential to prevent errors.\n- Be mindful of potential filename length limits imposed by the FTP server or protocol.\n\n**Dependency chain**\n- Depends on FTP directory or path properties to specify the full file location.\n- Utilized alongside FTP server credentials and connection settings.\n- May be influenced by file transfer mode (binary or ASCII) based on the file type.\n\n**Technical details**\n- Data type: string\n- Must conform to the FTP server’s"},"inProgressFileName":{"type":"string","description":"inProgressFileName is the designated temporary filename assigned to a file during the FTP upload process to clearly indicate that the file transfer is currently underway and not yet complete. This naming convention serves as a safeguard to prevent other systems or processes from prematurely accessing, processing, or locking the file before the upload finishes successfully. Typically, once the upload is finalized, the file is renamed to its intended permanent filename, signaling that it is ready for further use or processing. Utilizing an in-progress filename helps maintain data integrity, avoid conflicts, and streamline workflow automation in environments where files are continuously transferred and monitored. 
It plays a critical role in managing file state visibility, ensuring that incomplete or partially transferred files are not mistakenly processed, which could lead to data corruption or inconsistent system behavior.\n\n**Field behavior**\n- Acts as a temporary marker for files actively being uploaded via FTP or similar protocols.\n- Prevents other processes, systems, or automated workflows from accessing or processing the file until the upload is fully complete.\n- The file is renamed atomically from the inProgressFileName to the final intended filename upon successful upload completion.\n- Helps manage concurrency, avoid file corruption, partial reads, or race conditions during file transfer.\n- Indicates the current state of the file within the transfer lifecycle, providing clear visibility into upload progress.\n- Supports cleanup mechanisms by identifying orphaned or stale temporary files resulting from interrupted uploads.\n\n**Implementation guidance**\n- Choose a unique, consistent, and easily identifiable naming pattern distinct from final filenames to clearly denote the in-progress status.\n- Commonly use suffixes or prefixes such as \".inprogress\", \".tmp\", \"_uploading\", or similar conventions that are recognized across systems.\n- Ensure the FTP server and client support atomic renaming operations to avoid partial file visibility or race conditions.\n- Verify that the chosen temporary filename does not collide with existing files in the target directory to prevent overwriting or confusion.\n- Maintain consistent naming conventions across all related systems, scripts, and automation tools for clarity and maintainability.\n- Implement robust cleanup routines to detect and remove orphaned or stale in-progress files from failed or abandoned uploads.\n- Coordinate synchronization between upload completion and renaming to prevent premature processing or file locking issues.\n\n**Examples**\n- \"datafile.csv.inprogress\"\n- \"upload.tmp\"\n- \"report_20240615.tmp\"\n- \"file_upload_inprogress\"\n- \"transaction_12345.uploading\"\n\n**Important notes**\n- This filename is strictly temporary and should never be"},"backupDirectoryPath":{"type":"string","description":"backupDirectoryPath specifies the file system path to the directory where backup files are stored during FTP operations. This directory serves as the designated location for saving copies of files before they are overwritten or deleted, ensuring that original data can be recovered if necessary. The path can be either absolute or relative, depending on the system context, and must conform to the operating system’s file path conventions. It is essential that this path is valid, accessible, and writable by the FTP service or application performing the backups to guarantee reliable backup creation and data integrity. Proper configuration of this path is critical to prevent data loss and to maintain a secure and organized backup environment.  \n**Field behavior:**  \n- Specifies the target directory for storing backup copies of files involved in FTP transactions.  \n- Activated only when backup functionality is enabled within the FTP process.  \n- Requires the path to be valid, accessible, and writable to ensure successful backup creation.  \n- Supports both absolute and relative paths, with relative paths resolved against a defined base directory.  \n- If unset or empty, backup operations may be disabled or fallback to a default directory if configured.  
\n**Implementation guidance:**  \n- Validate the existence of the directory and verify write permissions before initiating backups.  \n- Normalize paths to handle trailing slashes, redundant separators, and platform-specific path formats.  \n- Provide clear and actionable error messages if the directory is invalid, inaccessible, or lacks necessary permissions.  \n- Support environment variables or placeholders for dynamic path resolution where applicable.  \n- Enforce security best practices by restricting access to the backup directory to authorized users or processes only.  \n- Ensure sufficient disk space is available in the backup directory to accommodate backup files without interruption.  \n- Implement concurrency controls such as file locking or synchronization if multiple backup operations may occur simultaneously.  \n**Examples:**  \n- \"/var/ftp/backups\"  \n- \"C:\\\\ftp\\\\backup\"  \n- \"./backup_files\"  \n- \"/home/user/ftp_backup\"  \n**Important notes:**  \n- The backup directory must have adequate storage capacity to prevent backup failures due to insufficient space.  \n- Permissions must allow the FTP process or service to create and modify files within the directory.  \n- Backup files may contain sensitive or confidential data; appropriate security measures should be enforced to protect them.  \n- Path formatting should align with the conventions of the underlying operating system to avoid errors.  \n- Failure to properly configure this path can result in loss of backup data or inability to recover"}}},"Jdbc":{"type":"object","description":"Configuration for JDBC import operations. Defines how data is written to a database\nvia a JDBC connection.\n\n**Query type determines which fields are required**\n\n| queryType        | Required fields              | Do NOT set        |\n|------------------|------------------------------|-------------------|\n| [\"per_record\"]   | query (array of SQL strings) | bulkInsert        |\n| [\"per_page\"]     | query (array of SQL strings) | bulkInsert        |\n| [\"bulk_insert\"]  | bulkInsert object            | query             |\n| [\"bulk_load\"]    | bulkLoad object              | query             |\n\n**Critical:** query IS AN ARRAY OF STRINGS\nThe query field must be an array of plain strings, NOT a single string and NOT an array of objects.\nCorrect: [\"INSERT INTO users (name) VALUES ('{{name}}')\"]\nWrong: \"INSERT INTO users ...\"\nWrong: [{\"query\": \"INSERT INTO users ...\"}]\n","properties":{"query":{"type":"array","items":{"type":"string"},"description":"Array of SQL query strings to execute. REQUIRED when queryType is [\"per_record\"] or [\"per_page\"].\n\nEach element is a complete SQL statement as a plain string. Typically contains a single query.\n\nUse Handlebars {{fieldName}} syntax to inject values from incoming records.\n\n**Format — array of strings**\n- CORRECT: [\"INSERT INTO users (name, email) VALUES ('{{name}}', '{{email}}')\"]\n- WRONG: \"INSERT INTO users ...\"  (not an array)\n- WRONG: [{\"query\": \"INSERT INTO users ...\"}]  (objects are invalid — causes Cast error)\n\n**Examples**\n- INSERT: [\"INSERT INTO users (name, email) VALUES ('{{name}}', '{{email}}')\"]\n- UPDATE: [\"UPDATE inventory SET qty = {{qty}} WHERE sku = '{{sku}}'\"]\n- MERGE/UPSERT: [\"MERGE INTO target USING (SELECT CAST(? AS VARCHAR) AS email) AS src ON target.email = src.email WHEN MATCHED THEN UPDATE SET name = ? 
WHEN NOT MATCHED THEN INSERT (name, email) VALUES (?, ?)\"]\n\n**String vs numeric values in handlebars**\n- Strings: wrap in single quotes — '{{name}}'\n- Numbers: no quotes — {{quantity}}\n"},"queryType":{"type":"array","items":{"type":"string","enum":["bulk_insert","per_record","per_page","bulk_load"]},"description":"Execution strategy for the SQL operation. REQUIRED. Must be an array with one value.\n\n**Decision tree**\n\n1. If UPDATE or UPSERT/MERGE → [\"per_record\"] (set query field)\n2. If INSERT with \"ignore existing\" / \"skip duplicates\" / match logic → [\"per_record\"] (set query field)\n3. If pure INSERT with no duplicate checking → [\"bulk_insert\"] (set bulkInsert object)\n4. If high-volume bulk load → [\"bulk_load\"] (set bulkLoad object)\n\n**Critical relationship to other fields**\n| queryType        | REQUIRES              | DO NOT SET   |\n|------------------|-----------------------|--------------|\n| [\"per_record\"]   | query (array)         | bulkInsert   |\n| [\"per_page\"]     | query (array)         | bulkInsert   |\n| [\"bulk_insert\"]  | bulkInsert object     | query        |\n| [\"bulk_load\"]    | bulkLoad object       | query        |\n\n**Examples**\n- Per-record upsert: [\"per_record\"]\n- Bulk insert: [\"bulk_insert\"]\n- Bulk load: [\"bulk_load\"]\n"},"bulkInsert":{"type":"object","description":"Bulk insert configuration. REQUIRED when queryType is [\"bulk_insert\"]. DO NOT SET when queryType is [\"per_record\"].\n\nEnables efficient batch insertion of records into a database table without per-record SQL.\n","properties":{"tableName":{"type":"string","description":"Target database table name for bulk insert. REQUIRED.\nCan include schema qualifiers (e.g., \"schema.tableName\").\n"},"batchSize":{"type":"string","description":"Number of records per batch during bulk insert.\nLarger values improve throughput but use more memory.\nCommon values: \"1000\", \"5000\".\n"}}},"bulkLoad":{"type":"object","description":"Bulk load configuration. REQUIRED when queryType is [\"bulk_load\"]. Uses database-native bulk loading for maximum throughput.\n","properties":{"tableName":{"type":"string","description":"Target database table name for bulk load. 
REQUIRED.\nCan include schema qualifiers (e.g., \"schema.tableName\").\n"},"primaryKeys":{"type":["array","null"],"items":{"type":"string"},"description":"Primary key column names for upsert/merge during bulk load.\nWhen set, existing rows matching these keys are updated; non-matching rows are inserted.\nExample: [\"id\"] or [\"order_id\", \"product_id\"] for composite keys.\n"}}},"lookups":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string","description":"Lookup name, referenced in field mappings or ignore logic."},"query":{"type":"string","description":"SQL query for the lookup (e.g., \"SELECT id FROM users WHERE email = '{{email}}'\")."},"extract":{"type":"string","description":"Path in the lookup result to use as the value (e.g., \"id\")."},"_lookupCacheId":{"type":"string","format":"objectId","description":"Reference to a cached lookup for performance optimization."},"allowFailures":{"type":"boolean","description":"When true, lookup failures do not stop the import."},"map":{"type":"object","description":"Static value mapping for lookup results."},"default":{"type":"string","description":"Default value when lookup returns no results."}}},"description":"Lookup queries executed against the JDBC connection to resolve reference values.\nEach lookup runs a SQL query and extracts a value to use in field mappings.\n"}}},"Mongodb":{"type":"object","description":"Configuration for MongoDB imports. This schema defines ONLY the MongoDB-specific configuration properties.\n\n**IMPORTANT:** Properties like `ignoreExisting`, `ignoreMissing`, `name`, `description` are NOT part of this schema - they belong at a different level. When generating this configuration, return ONLY the properties defined here (method, collection, ignoreExtract, etc.).","properties":{"method":{"type":"string","enum":["insertMany","updateOne"],"description":"The MongoDB operation this import performs against the target collection. Note that this is a MongoDB collection method, not an HTTP verb.\n\n- **insertMany**: Inserts the incoming records as new documents in the collection. Use for create-style imports, optionally combined with the ignore settings (ignoreExtract or ignoreLookupFilter) to skip records that already exist.\n- **updateOne**: Updates a single existing document that matches the configured filter, applying the configured update. Combine with upsert to insert a new document when no match is found.\n\n**Field behavior**\n- Determines whether each import run creates new documents or modifies existing ones.\n- Influences which of the other MongoDB properties typically apply: updateOne works with filter, update, and upsert, while insertMany typically uses document and the ignore settings.\n- Directly controls the side effects of the import on the target collection.\n\n**Implementation guidance**\n- Use insertMany when the goal is to add documents; use updateOne when the goal is to modify (or upsert) existing documents.\n- For updateOne, provide a filter that uniquely identifies the target document and an update specification built from MongoDB update operators.\n- Validate the value against the allowed enum (insertMany, updateOne) before submitting the import configuration.\n\n**Examples**\n- insertMany: add each incoming record as a new document in the \"customers\" collection.\n- updateOne with upsert enabled: update the matching customer document, or insert it if no match exists.\n\n**Important notes**\n- The selected method must be consistent with the other MongoDB properties configured on this import (filter, update, upsert, document)."},"collection":{"type":"string","description":"Specifies the exact name of the MongoDB collection where data operations—including insertion, querying, updating, or deletion—will be executed. This property defines the precise target collection within the database, establishing the scope and context of the data affected by the API call. The collection name must strictly adhere to MongoDB's naming conventions to ensure compatibility, prevent conflicts with system-reserved collections, and maintain database integrity. 
Proper naming facilitates efficient data organization, indexing, query performance, and overall database scalability.\n\n**Field behavior**\n- Identifies the specific MongoDB collection for all CRUD (Create, Read, Update, Delete) operations.\n- Directly determines which subset of data is accessed, modified, or deleted during the API request.\n- Must be a valid, non-empty UTF-8 string that complies with MongoDB collection naming rules.\n- Case-sensitive, meaning collections named \"Users\" and \"users\" are distinct and separate.\n- Influences the application’s data flow and operational context by specifying the data container.\n- Changes to this property dynamically alter the target dataset for the API operation.\n\n**Implementation guidance**\n- Validate that the collection name is a non-empty UTF-8 string without null characters, spaces, or invalid symbols such as '$' or '.' except where allowed.\n- Ensure the name does not start with the reserved prefix \"system.\" to avoid conflicts with MongoDB internal collections.\n- Adopt consistent, descriptive, and meaningful naming conventions to enhance database organization, maintainability, and clarity.\n- Verify the existence of the collection before performing operations; implement robust error handling for cases where the collection does not exist.\n- Consider how the collection name impacts indexing strategies, query optimization, sharding, and overall database performance.\n- Maintain consistency in collection naming across the application to prevent unexpected behavior or data access issues.\n- Avoid overly long or complex names to reduce potential errors and improve readability.\n\n**Examples**\n- \"users\"\n- \"orders\"\n- \"inventory_items\"\n- \"logs_2024\"\n- \"customer_feedback\"\n- \"session_data\"\n- \"product_catalog\"\n\n**Important notes**\n- Collection names are case-sensitive and must be used consistently throughout the application to avoid data inconsistencies.\n- Avoid reserved names and prefixes such as \"system.\" to prevent interference with MongoDB’s internal collections and operations.\n- The choice of collection name can significantly affect database performance, indexing efficiency, and scalability.\n- Renaming a collection requires updating all references in the application and may necessitate data"},"filter":{"type":"string","description":"A MongoDB query filter object used to specify detailed criteria for selecting documents within a collection. This filter defines the exact conditions that documents must satisfy to be included in the query results, leveraging MongoDB's comprehensive and expressive query syntax. It supports a wide range of operators and expressions to enable precise and flexible querying of documents based on field values, existence, data types, ranges, nested properties, array contents, and more. 
The filter allows for complex logical combinations and can target deeply nested fields using dot notation, making it a powerful tool for fine-grained data retrieval.\n\n**Field behavior**\n- Determines which documents in the MongoDB collection are matched and returned by the query.\n- Supports complex query expressions including comparison operators (`$eq`, `$gt`, `$lt`, `$gte`, `$lte`, `$ne`), logical operators (`$and`, `$or`, `$not`, `$nor`), element operators (`$exists`, `$type`), array operators (`$in`, `$nin`, `$all`, `$elemMatch`), and evaluation operators (`$regex`, `$expr`).\n- Enables filtering based on simple fields, nested document properties using dot notation, and array contents.\n- Supports querying for null values, missing fields, and specific data types.\n- Allows combining multiple conditions using logical operators to form complex queries.\n- If omitted or provided as an empty object, the query defaults to returning all documents in the collection without filtering.\n- Can be used to optimize data retrieval by narrowing down the result set to only relevant documents.\n\n**Implementation guidance**\n- Construct the filter using valid MongoDB query operators and syntax to ensure correct behavior.\n- Validate and sanitize all input used to build the filter to prevent injection attacks or malformed queries.\n- Utilize dot notation to query nested fields within embedded documents or arrays.\n- Optimize query performance by aligning filters with indexed fields in the MongoDB collection.\n- Handle edge cases such as null values, missing fields, and data type variations appropriately within the filter.\n- Test filters thoroughly to ensure they return expected results and handle all relevant data scenarios.\n- Consider the impact of filter complexity on query execution time and resource usage.\n- Use projection and indexing strategies in conjunction with filters to improve query efficiency.\n- Be mindful of MongoDB version compatibility when using advanced operators or expressions.\n\n**Examples**\n- `{ \"status\": \"active\" }` — selects documents where the `status` field exactly matches \"active\".\n- `{ \"age\": { \"$gte\":"},"document":{"type":"string","description":"The main content or data structure representing a single record in a MongoDB database, encapsulated as a JSON-like object known as a document. This document consists of key-value pairs where keys are field names and values can be of various data types, including nested documents, arrays, strings, numbers, booleans, dates, and binary data, allowing for flexible and hierarchical data representation. It serves as the fundamental unit of data storage and retrieval within MongoDB collections and typically includes a unique identifier field (_id) that ensures each document can be distinctly accessed. Documents can contain metadata, support complex nested structures, and vary in schema within the same collection, reflecting MongoDB's schema-less design. 
This flexibility enables dynamic and evolving data models suited for diverse application needs, supporting efficient querying, indexing, and atomic operations at the document level.\n\n**Field behavior**\n- Represents an individual MongoDB document stored within a collection.\n- Contains key-value pairs with field names as keys and diverse data types as values, including nested documents and arrays.\n- Acts as the primary unit for data storage, retrieval, and manipulation in MongoDB.\n- Includes a mandatory unique identifier field (_id) unless auto-generated by MongoDB.\n- Supports flexible schema, allowing documents within the same collection to have different structures.\n- Can include metadata fields and support indexing for optimized queries.\n- Allows for embedding related data directly within the document to reduce the need for joins.\n- Supports atomic operations at the document level, ensuring consistency during updates.\n\n**Implementation guidance**\n- Ensure compliance with MongoDB BSON format constraints for data types and structure.\n- Validate field names to exclude reserved characters such as '.' and '$' unless explicitly permitted.\n- Support serialization and deserialization processes between application-level objects and MongoDB documents.\n- Enable handling of nested documents and arrays to represent complex data models effectively.\n- Consider indexing frequently queried fields within the document to enhance performance.\n- Monitor document size to stay within MongoDB’s 16MB limit to avoid performance degradation.\n- Use appropriate data types to optimize storage and query efficiency.\n- Implement validation rules or schema enforcement at the application or database level if needed to maintain data integrity.\n\n**Examples**\n- { \"_id\": ObjectId(\"507f1f77bcf86cd799439011\"), \"name\": \"Alice\", \"age\": 30, \"address\": { \"street\": \"123 Main St\", \"city\": \"Metropolis\" } }\n- { \"productId\": \""},"update":{"type":"string","description":"Update operation details specifying the precise modifications to be applied to one or more documents within a MongoDB collection. This field defines the exact changes using MongoDB's rich set of update operators, enabling atomic and efficient modifications to fields, arrays, or embedded documents without replacing entire documents unless explicitly intended. It supports a comprehensive range of update operations such as setting new values, incrementing numeric fields, removing fields, appending or removing elements in arrays, renaming fields, and more, providing fine-grained control over document mutations. Additionally, it accommodates advanced update mechanisms including pipeline-style updates introduced in MongoDB 4.2+, allowing for complex transformations and conditional updates within a single operation. 
The update specification can also leverage array filters and positional operators to target specific elements within arrays, enhancing the precision and flexibility of updates.\n\n**Field behavior**\n- Specifies the criteria and detailed modifications to apply to matching documents in a collection.\n- Supports a wide array of MongoDB update operators like `$set`, `$unset`, `$inc`, `$push`, `$pull`, `$addToSet`, `$rename`, `$mul`, `$min`, `$max`, and others.\n- Can target single or multiple documents depending on the operation context and options such as `multi` or `upsert`.\n- Ensures atomicity of update operations to maintain data integrity and consistency across concurrent operations.\n- Allows both partial updates using operators and full document replacements when the update object is a complete document without operators.\n- Supports pipeline-style updates for complex, multi-stage transformations and conditional logic within updates.\n- Enables the use of array filters and positional operators to selectively update elements within arrays based on specified conditions.\n- Handles upsert behavior to insert new documents if no existing documents match the update criteria, when enabled.\n\n**Implementation guidance**\n- Validate the update object rigorously to ensure compliance with MongoDB update syntax and semantics, preventing runtime errors.\n- Favor using update operators for partial updates to optimize performance and minimize unintended data overwrites.\n- Implement graceful handling for cases where no documents match the update criteria, optionally supporting upsert behavior to insert new documents if none exist.\n- Explicitly control whether updates affect single or multiple documents to avoid accidental mass modifications.\n- Incorporate robust error handling for invalid update operations, schema conflicts, or violations of database constraints.\n- Consider the impact of update operations on indexes, triggers, and other database mechanisms to maintain overall system performance and data integrity.\n- Support advanced update features such as array filters, positional operators"},"upsert":{"type":"boolean","description":"Upsert is a boolean flag that controls whether a MongoDB update operation should insert a new document if no existing document matches the specified query criteria, or strictly update existing documents only. When set to true, the operation performs an atomic \"update or insert\" action: it updates the matching document if found, or inserts a new document constructed from the update criteria if none exists. This ensures that after the operation, a document matching the criteria will exist in the collection. When set to false, the operation only updates documents that already exist and skips the operation entirely if no match is found, preventing any new documents from being created. 
This flag is essential for scenarios where maintaining data integrity and avoiding unintended document creation is critical.\n\n**Field behavior**\n- Determines whether the update operation can create a new document when no existing document matches the query.\n- If true, performs an atomic operation that either updates an existing document or inserts a new one.\n- If false, restricts the operation to updating existing documents only, with no insertion.\n- Influences the database state by enabling conditional document creation during update operations.\n- Affects the outcome of update queries by potentially expanding the dataset with new documents.\n\n**Implementation guidance**\n- Use upsert=true when you need to guarantee the existence of a document after the operation, regardless of prior presence.\n- Use upsert=false to restrict changes strictly to existing documents, avoiding unintended document creation.\n- Ensure that when upsert is true, the update document includes all required fields to create a valid new document.\n- Validate that the query filter and update document are logically consistent to prevent unexpected or malformed insertions.\n- Consider the impact on database size, indexing, and performance when enabling upsert, as it may increase document count.\n- Test the behavior in your specific MongoDB driver and version to confirm upsert semantics and compatibility.\n- Be mindful of potential race conditions in concurrent environments that could lead to duplicate documents if not handled properly.\n\n**Examples**\n- `upsert: true` — Update the matching document if found; otherwise, insert a new document based on the update criteria.\n- `upsert: false` — Update only if a matching document exists; skip the operation if no match is found.\n- Using upsert to create a user profile document if it does not exist during an update operation.\n- Employing upsert in configuration management to ensure default settings documents are present and updated as needed.\n\n**Important notes**\n- Upsert operations can increase"},"ignoreExtract":{"type":"string","description":"Enter the path to the field in the source record that should be used to identify existing records. If a value is found for this field, then the source record will be considered an existing record.\n\nThe dynamic field list makes it easy for you to select the field. If a field contains special characters (which may be the case for certain APIs), then the field is enclosed with square brackets [ ], for example, [field-name].\n\n[*] indicates that the specified field is an array, for example, items[*].id. In this case, you should replace * with a number corresponding to the array item. The value for only this array item (not, the entire array) is checked.\n**CRITICAL:** JSON path to the field in the incoming/source record that should be used to determine if the record already exists in the target system.\n\n**This field is ONLY used when `ignoreExisting` is set to true.** Note: `ignoreExisting` is a separate property that is NOT part of this mongodb configuration.\n\nWhen `ignoreExisting: true` is set (at the import level), this field specifies which field from the incoming data should be checked for a value. If that field has a value, the record is considered to already exist and will be skipped.\n\n**When to set this field**\n\n**ALWAYS set this field** when:\n1. The `ignoreExisting` property will be set to `true` (for \"ignore existing\" scenarios)\n2. 
The user's prompt mentions checking a specific field to identify existing records\n3. The prompt includes phrases like:\n   - \"indicated by the [field] field\"\n   - \"matching the [field] field\"\n   - \"based on [field]\"\n   - \"using the [field] field\"\n   - \"if [field] is populated\"\n   - \"where [field] exists\"\n\n**How to determine the value**\n\n1. **Look for the identifier field in the prompt:**\n   - \"ignoring existing customers indicated by the **id field**\" → `ignoreExtract: \"id\"`\n   - \"skip existing vendors based on the **email field**\" → `ignoreExtract: \"email\"`\n   - \"ignore records where **customerId** is populated\" → `ignoreExtract: \"customerId\"`\n\n2. **Use the field from the incoming/source data** (NOT the target system field):\n   - This is the field path in the data coming INTO the import\n   - Should match a field that will be present in the source records\n\n3. **Apply special character handling:**\n   - If a field contains special characters (hyphens, spaces, etc.), enclose with square brackets: `[field-name]`, `[customer id]`\n   - For array notation, use `[*]` to indicate array items: `items[*].id` (replace `*` with index if needed)\n\n**Field path syntax**\n\n- **Simple field:** `id`, `email`, `customerId`\n- **Nested field:** `customer.id`, `billing.email`\n- **Field with special chars:** `[customer-id]`, `[external_id]`, `[Customer Name]`\n- **Array field:** `items[*].id` (or `items[0].id` for specific index)\n- **Nested in array:** `addresses[*].postalCode`\n\n**Examples**\n\nThese examples show ONLY the mongodb configuration (not the full import structure):\n\n**Example 1: Simple field**\nPrompt: \"Create customers while ignoring existing customers indicated by the id field\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"customers\",\n  \"ignoreExtract\": \"id\"\n}\n```\n(Note: `ignoreExisting: true` should also be set, but at a different level - not shown here)\n\n**Example 2: Field with special characters**\nPrompt: \"Import vendors, skip existing ones based on the vendor-code field\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"vendors\",\n  \"ignoreExtract\": \"[vendor-code]\"\n}\n```\n\n**Example 3: Nested field**\nPrompt: \"Add accounts, ignore existing where customer.email matches\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"accounts\",\n  \"ignoreExtract\": \"customer.email\"\n}\n```\n\n**Example 4: No ignoreExisting scenario**\nPrompt: \"Create or update all customer records\"\n```json\n{\n  \"method\": \"updateOne\",\n  \"collection\": \"customers\"\n}\n```\n(ignoreExtract should NOT be set when ignoreExisting is not true)\n\n**Important notes**\n\n- **DO NOT SET** this field if `ignoreExisting` is not true\n- **`ignoreExisting` is NOT part of this mongodb schema** - it belongs at a different level\n- This specifies the SOURCE field, not the target MongoDB field\n- The field path is case-sensitive\n- Must be a valid field that exists in the incoming data\n- Works in conjunction with `ignoreExisting` - both must be set for this feature to work\n- If the specified field has a value in the incoming record, that record is considered existing and will be skipped\n\n**Common patterns**\n\n- \"ignore existing [records] indicated by the **[field]**\" → Set `ignoreExtract: \"[field]\"`\n- \"skip existing [records] based on **[field]**\" → Set `ignoreExtract: \"[field]\"`\n- \"if **[field]** is populated\" → Set `ignoreExtract: \"[field]\"`\n- No mention of specific field for 
identification → May need to use default like \"id\" or \"_id\""},"ignoreLookupFilter":{"type":"string","description":"If you are adding documents to your MongoDB instance and you have the Ignore Existing flag set to true please enter a filter object here to find existing documents in this collection. The value of this field must be a valid JSON string describing a MongoDB filter object in the correct format and with the correct operators. Refer to the MongoDB documentation for the list of valid query operators and the correct filter object syntax.\n**CRITICAL:** A JSON-stringified MongoDB query filter used to search for existing documents in the target collection.\n\nIt defines the criteria to match incoming records against existing MongoDB documents. If the query returns a result, the record is considered \"existing\" and the operation (usually insert) is skipped.\n\n**Critical distinction vs `ignoreExtract`**\n- Use **`ignoreExtract`** when you just want to check if a field exists on the **INCOMING** record (no DB query).\n- Use **`ignoreLookupFilter`** when you need to query the **MONGODB DATABASE** to see if a record exists (e.g., \"check if email exists in DB\").\n\n**When to set this field**\nSet this field when:\n1. Top-level `ignoreExisting` is `true`.\n2. The prompt implies checking the database for duplicates (e.g., \"skip if email already exists\", \"prevent duplicate SKUs\", \"match on email\").\n\n**Format requirements**\n- Must be a valid **JSON string** representing a MongoDB query.\n- Use Handlebars `{{FieldName}}` to reference values from the incoming record.\n- Format: `\"{ \\\"mongoField\\\": \\\"{{incomingField}}\\\" }\"`\n\n**Examples**\n\n**Example 1: Match on a single field**\nPrompt: \"Insert users, skip if email already exists in the database\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"users\",\n  \"ignoreLookupFilter\": \"{\\\"email\\\":\\\"{{email}}\\\"}\"\n}\n```\n\n**Example 2: Match on multiple fields**\nPrompt: \"Add products, ignore if SKU and VerifyID match existing products\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"products\",\n  \"ignoreLookupFilter\": \"{\\\"sku\\\":\\\"{{sku}}\\\", \\\"verify_id\\\":\\\"{{verify_id}}\\\"}\"\n}\n```\n\n**Example 3: Using operators**\nPrompt: \"Import orders, ignore if order_id matches\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"orders\",\n  \"ignoreLookupFilter\": \"{\\\"order_id\\\": { \\\"$eq\\\": \\\"{{id}}\\\" } }\"\n}\n```\n\n**Important notes**\n- The value MUST be a string (stringify the JSON).\n- Mongo field names (keys) must match the **collection schema**.\n- Handlebars variables (values) must match the **incoming data**."}}},"NetSuite":{"type":"object","description":"Configuration for NetSuite exports","properties":{"lookups":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the category or classification of the lookup being referenced. 
It defines the nature or kind of data that the lookup represents, enabling the system to handle it appropriately based on its type.\n\n**Field behavior**\n- Determines the category or classification of the lookup.\n- Influences how the lookup data is processed and validated.\n- Helps in filtering or grouping lookups by their type.\n- Typically represented as a string or enumerated value indicating the lookup category.\n\n**Implementation guidance**\n- Define a clear set of allowed values or an enumeration for the type to ensure consistency.\n- Validate the type value against the predefined set during data input or API requests.\n- Use the type property to drive conditional logic or UI rendering related to the lookup.\n- Document all possible type values and their meanings for API consumers.\n\n**Examples**\n- \"country\" — representing a lookup of countries.\n- \"currency\" — representing a lookup of currency codes.\n- \"status\" — representing a lookup of status codes or states.\n- \"category\" — representing a lookup of item categories.\n\n**Important notes**\n- The type value should be consistent across the system to avoid ambiguity.\n- Changing the type of an existing lookup may affect dependent systems or data integrity.\n- The type property is essential for distinguishing between different lookup datasets.\n\n**Dependency chain**\n- May depend on or be referenced by other properties that require lookup data.\n- Used in conjunction with lookup values or keys to provide meaningful data.\n- Can influence validation rules or business logic applied to the lookup.\n\n**Technical details**\n- Typically implemented as a string or an enumerated type in the API schema.\n- Should be indexed or optimized for quick filtering and retrieval.\n- May be case-sensitive or case-insensitive depending on system design.\n- Should be included in API documentation with all valid values listed."},"searchField":{"type":"string","description":"searchField specifies the particular field or attribute within a dataset or index that the search operation should target. 
This property determines which field the search query will be applied to, enabling focused and efficient retrieval of relevant information.\n\n**Field behavior**\n- Defines the specific field in the data source to be searched.\n- Limits the search scope to the designated field, improving search precision.\n- Can accept field names as strings, corresponding to indexed or searchable attributes.\n- If omitted or null, the search may default to a predefined field or perform a broad search across multiple fields.\n\n**Implementation guidance**\n- Ensure the field name provided matches exactly with the field names defined in the data schema or index.\n- Validate that the field is searchable and supports the type of queries intended.\n- Consider supporting nested or compound field names if the data structure is hierarchical.\n- Provide clear error handling for invalid or unsupported field names.\n- Allow for case sensitivity or insensitivity based on the underlying search engine capabilities.\n\n**Examples**\n- \"title\" — to search within the title field of documents.\n- \"author.name\" — to search within a nested author name field.\n- \"description\" — to search within the description or summary field.\n- \"tags\" — to search within a list or array of tags associated with items.\n\n**Important notes**\n- The effectiveness of the search depends on the indexing and search capabilities of the specified field.\n- Using an incorrect or non-existent field name may result in no search results or errors.\n- Some fields may require special handling, such as tokenization or normalization, to support effective searching.\n- The property is case-sensitive if the underlying system treats field names as such.\n\n**Dependency chain**\n- Dependent on the data schema or index configuration defining searchable fields.\n- May interact with other search parameters such as search query, filters, or sorting.\n- Influences the search engine or database query construction.\n\n**Technical details**\n- Typically represented as a string matching the field identifier in the data source.\n- May support dot notation for nested fields (e.g., \"user.address.city\").\n- Should be sanitized to prevent injection attacks or malformed queries.\n- Used by the search backend to construct field-specific query clauses."},"expression":{"type":"string","description":"Expression used to define the criteria or logic for the lookup operation. 
This expression typically consists of a string that can include variables, operators, functions, or references to other data elements, enabling dynamic and flexible lookup conditions.\n\n**Field behavior**\n- Specifies the condition or formula that determines how the lookup is performed.\n- Can include variables, constants, operators, and functions to build complex expressions.\n- Evaluated at runtime to filter or select data based on the defined logic.\n- Supports dynamic referencing of other fields or parameters within the lookup context.\n\n**Implementation guidance**\n- Ensure the expression syntax is consistent with the supported expression language or parser.\n- Validate expressions before execution to prevent runtime errors.\n- Support common operators (e.g., ==, !=, >, <, AND, OR) and functions as per the system capabilities.\n- Allow expressions to reference other properties or variables within the lookup scope.\n- Provide clear error messages if the expression is invalid or cannot be evaluated.\n\n**Examples**\n- `\"status == 'active' AND score > 75\"`\n- `\"userRole == 'admin' OR accessLevel >= 5\"`\n- `\"contains(tags, 'urgent')\"`\n- `\"date >= '2024-01-01' AND date <= '2024-12-31'\"`\n\n**Important notes**\n- The expression must be a valid string conforming to the expected syntax.\n- Incorrect or malformed expressions may cause lookup failures or unexpected results.\n- Expressions should be designed to optimize performance, avoiding overly complex or resource-intensive logic.\n- The evaluation context (available variables and functions) should be clearly documented for users.\n\n**Dependency chain**\n- Depends on the lookup context providing necessary variables or data references.\n- May depend on the expression evaluation engine or parser integrated into the system.\n- Influences the outcome of the lookup operation and subsequent data processing.\n\n**Technical details**\n- Typically represented as a string data type.\n- Parsed and evaluated by an expression engine or interpreter at runtime.\n- May support standard expression languages such as SQL-like syntax, JSONPath, or custom domain-specific languages.\n- Should handle escaping and quoting of string literals within the expression."},"resultField":{"type":"string","description":"resultField specifies the name of the field in the lookup result that contains the desired value to be retrieved or used.\n\n**Field behavior**\n- Identifies the specific field within the lookup result data to extract.\n- Determines which piece of information from the lookup response is returned or processed.\n- Must correspond to a valid field name present in the lookup result structure.\n- Used to map or transform lookup data into the expected output format.\n\n**Implementation guidance**\n- Ensure the field name matches exactly with the field in the lookup result, including case sensitivity if applicable.\n- Validate that the specified field exists in all possible lookup responses to avoid runtime errors.\n- Use this property to customize which data from the lookup is utilized downstream.\n- Consider supporting nested field paths if the lookup result is a complex object.\n\n**Examples**\n- \"email\" — to extract the email address field from the lookup result.\n- \"userId\" — to retrieve the user identifier from the lookup data.\n- \"address.city\" — to access a nested city field within an address object in the lookup result.\n\n**Important notes**\n- Incorrect or misspelled field names will result in missing or null values.\n- The field must be 
present in the lookup response schema; otherwise, the lookup will not yield useful data.\n- This property does not perform any transformation; it only selects the field to be used.\n\n**Dependency chain**\n- Depends on the structure and schema of the lookup result data.\n- Used in conjunction with the lookup operation that fetches the data.\n- May influence downstream processing or mapping logic that consumes the lookup output.\n\n**Technical details**\n- Typically a string representing the key or path to the desired field in the lookup result object.\n- May support dot notation for nested fields if supported by the implementation.\n- Should be validated against the lookup result schema during configuration or runtime."},"includeInactive":{"type":"boolean","description":"IncludeInactive: Specifies whether to include inactive items in the lookup results.\n**Field behavior**\n- When set to true, the lookup results will contain both active and inactive items.\n- When set to false or omitted, only active items will be included in the results.\n- Affects the filtering logic applied during data retrieval for lookups.\n**Implementation guidance**\n- Default the value to false if not explicitly provided to avoid unintended inclusion of inactive items.\n- Ensure that the data source supports filtering by active/inactive status.\n- Validate the input to accept only boolean values.\n- Consider performance implications when including inactive items, especially if the dataset is large.\n**Examples**\n- includeInactive: true — returns all items regardless of their active status.\n- includeInactive: false — returns only items marked as active.\n**Important notes**\n- Including inactive items may expose deprecated or obsolete data; use with caution.\n- The definition of \"inactive\" depends on the underlying data model and should be clearly documented.\n- This property is typically used in administrative or audit contexts where full data visibility is required.\n**Dependency chain**\n- Dependent on the data source’s ability to distinguish active vs. inactive items.\n- May interact with other filtering or pagination parameters in the lookup API.\n**Technical details**\n- Data type: Boolean.\n- Default value: false.\n- Typically implemented as a query parameter or request body field in lookup API calls.\n- Requires backend support to filter data based on active status flags or timestamps."},"_id":{"type":"object","description":"_id: The unique identifier for the lookup entry within the system. 
This identifier is typically a string or an ObjectId that uniquely distinguishes each lookup record from others in the database or dataset.\n**Field behavior**\n- Serves as the primary key for the lookup entry.\n- Must be unique across all lookup entries.\n- Immutable once assigned; should not be changed to maintain data integrity.\n- Used to reference the lookup entry in queries, updates, and deletions.\n**Implementation guidance**\n- Use a consistent format for the identifier, such as a UUID or database-generated ObjectId.\n- Ensure uniqueness by leveraging database constraints or application-level checks.\n- Assign the _id at the time of creation of the lookup entry.\n- Avoid exposing internal _id values directly to end-users unless necessary.\n**Examples**\n- \"507f1f77bcf86cd799439011\"\n- \"123e4567-e89b-12d3-a456-426614174000\"\n- \"lookup_001\"\n**Important notes**\n- The _id field is critical for data integrity and should be handled with care.\n- Changing the _id after creation can lead to broken references and data inconsistencies.\n- In some systems, the _id may be automatically generated by the database.\n**Dependency chain**\n- May be referenced by other fields or entities that link to the lookup entry.\n- Dependent on the database or storage system's method of generating unique identifiers.\n**Technical details**\n- Typically stored as a string or a specialized ObjectId type depending on the database.\n- Indexed to optimize query performance.\n- Often immutable and enforced by database schema constraints."},"default":{"type":["object","null"],"description":"A boolean property indicating whether the lookup is the default selection among multiple lookup options.\n\n**Field behavior**\n- Determines if this lookup is automatically selected or used by default in the absence of user input.\n- Only one lookup should be marked as default within a given context to avoid ambiguity.\n- Influences the initial state or value presented to the user or system when multiple lookups are available.\n\n**Implementation guidance**\n- Set to `true` for the lookup that should be the default choice; all others should be `false` or omitted.\n- Validate that only one lookup per group or context has `default` set to `true`.\n- Use this property to pre-populate fields or guide user selections in UI or processing logic.\n\n**Examples**\n- `default: true` — This lookup is the default selection.\n- `default: false` — This lookup is not the default.\n- Omitted `default` property implies the lookup is not the default.\n\n**Important notes**\n- If multiple lookups have `default` set to `true`, the system behavior may be unpredictable.\n- The absence of a `default` lookup means no automatic selection; user input or additional logic is required.\n- This property is typically optional but recommended for clarity in multi-lookup scenarios.\n\n**Dependency chain**\n- Related to the `lookups` array or collection where multiple lookup options exist.\n- May influence UI components or backend logic that rely on default selections.\n\n**Technical details**\n- Data type: Boolean (`true` or `false`).\n- Default value: Typically `false` if omitted.\n- Should be included at the same level as other lookup properties within the `lookups` object."}},"description":"A comprehensive collection of lookup tables or reference data sets designed to support and enhance the main data model or application logic. 
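\n\nAs a concrete point of reference, a single lookup entry assembled from the sub-fields documented above might look like the following (a sketch only; the values are purely illustrative):\n```json\n{\n  \"searchField\": \"email\",\n  \"expression\": \"status == 'active'\",\n  \"resultField\": \"internalId\",\n  \"includeInactive\": false\n}\n```\n\n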
These lookup tables provide predefined, authoritative sets of values that can be consistently referenced throughout the system to ensure data uniformity, minimize redundancy, and facilitate robust data validation and user interface consistency. They serve as centralized repositories for static or infrequently changing data that underpin dynamic application behavior, such as dropdown options, status codes, categorizations, and mappings. By centralizing this reference data, the system promotes maintainability, reduces errors, and enables seamless integration across different modules and services. Lookup tables may also support complex data structures, localization, versioning, and hierarchical relationships to accommodate diverse application requirements and evolving business rules.\n\n**Field behavior**\n- Contains multiple distinct lookup tables, each identified by a unique name representing a specific domain or category of reference data.\n- Each lookup table comprises structured entries, which may be simple key-value pairs or complex objects with multiple attributes, defining valid options, mappings, or metadata.\n- Lookup tables standardize inputs, support UI components like dropdowns and filters, enforce data integrity, and drive conditional logic across the application.\n- Typically static or updated infrequently, ensuring stability and consistency in dependent processes.\n- Supports localization or internationalization to provide values in multiple languages or regional formats.\n- May represent hierarchical or relational data structures within lookup tables to capture complex relationships.\n- Enables versioning to track changes over time and facilitate rollback, auditing, or backward compatibility.\n- Acts as a single source of truth for reference data, reducing duplication and discrepancies across systems.\n- Can be extended or customized to meet specific domain or organizational needs without impacting core application logic.\n\n**Implementation guidance**\n- Organize as a dictionary or map where each key corresponds to a lookup table name and the associated value contains the set of entries.\n- Implement efficient loading, caching, and retrieval mechanisms to optimize performance and minimize latency.\n- Provide controlled update or refresh capabilities to allow seamless modifications without disrupting ongoing application operations.\n- Enforce strict validation rules on lookup entries to prevent invalid, inconsistent, or duplicate data.\n- Incorporate support for localization and internationalization where applicable, enabling multi-language or region-specific values.\n- Maintain versioning or metadata to track changes and support backward compatibility.\n- Consider access control policies if lookup data includes sensitive or restricted information.\n- Design APIs or interfaces for easy querying, filtering, and retrieval of lookup data by client applications.\n- Ensure synchronization mechanisms are in place"},"operation":{"type":"string","enum":["add","update","addupdate","attach","detach","delete"],"description":"Specifies the type of operation to perform on NetSuite records. This is a plain string value, NOT an object.\n\n**Valid values**\n- \"add\" — Create a new record. Use when importing new records only.\n- \"update\" — Update an existing record. Requires internalIdLookup to find the record.\n- \"addupdate\" — Upsert: update if found, create if not. Requires internalIdLookup. 
This is the most common operation.\n- \"attach\" — Attach a record to another record.\n- \"detach\" — Detach a record from another record.\n- \"delete\" — Delete a record. Requires internalIdLookup to find the record.\n\n**Implementation guidance**\n- Value is automatically lowercased by the API.\n- For \"addupdate\", \"update\", and \"delete\", you MUST also set internalIdLookup to specify how to find existing records.\n- For \"add\" with ignoreExisting, set internalIdLookup to check for duplicates before creating.\n- Default to \"addupdate\" when the user says \"sync\", \"upsert\", or \"create or update\".\n- Use \"add\" when the user says \"create\" or \"insert\" without mentioning updates.\n\n**Examples**\n- \"addupdate\" — for syncing records (create or update)\n- \"add\" — for creating new records only\n- \"update\" — for updating existing records only\n- \"delete\" — for removing records\n\n**Important notes**\n- This field is a string, NOT an object. Set it directly: \"operation\": \"addupdate\"\n- Do NOT wrap it in an object like {\"type\": \"addupdate\"} — that will cause a validation error."},"isFileProvider":{"type":"boolean","description":"isFileProvider indicates whether the NetSuite integration functions as a file provider, enabling comprehensive capabilities to access, manage, and manipulate files within the NetSuite environment. When enabled, this property grants the integration full file-related operations such as browsing directories, uploading new files, downloading existing files, updating file metadata, and deleting files directly through the integration interface. This functionality is essential for workflows that require seamless, automated interaction with NetSuite's file storage system, facilitating efficient document handling, version control, and integration with external file management processes. 
It also impacts the availability of specific API endpoints and user interface components related to file management, and may influence system performance and API rate limits due to the volume of file operations.\n\n**Field behavior**\n- When set to true, activates full file management features including browsing, uploading, downloading, updating, and deleting files within NetSuite.\n- When false or omitted, disables all file provider functionalities, preventing any file-related operations through the integration.\n- Acts as a toggle controlling the availability of file-centric capabilities and related API endpoints.\n- Changes typically require re-authentication or reinitialization to apply updated file access permissions correctly.\n- May affect integration performance and API rate limits due to potentially high file operation volumes.\n\n**Implementation guidance**\n- Enable only if direct, comprehensive interaction with NetSuite files is required.\n- Ensure the integration has appropriate NetSuite permissions, roles, and OAuth scopes for secure file access and manipulation.\n- Validate strictly as a boolean to prevent misconfiguration.\n- Assess security implications thoroughly, enforcing strict access controls and audit logging when enabled.\n- Monitor API usage and system performance closely, as file operations can increase load and impact rate limits.\n- Coordinate with NetSuite administrators to align file access policies with organizational security and compliance standards.\n- Consider effects on middleware and downstream systems handling file transfers, error handling, and synchronization.\n\n**Examples**\n- `isFileProvider: true` — Enables full file provider capabilities, allowing browsing, uploading, downloading, and managing files within NetSuite.\n- `isFileProvider: false` — Disables all file-related functionalities, restricting the integration to non-file operations only.\n\n**Important notes**\n- Enabling file provider functionality may require additional NetSuite OAuth scopes, roles, or permissions specifically related to file access.\n- File operations can significantly impact API rate limits and quotas; plan and monitor usage accordingly.\n- This property exclusively controls file management capabilities and does not affect other integration functionalities."},"customFieldMetadata":{"type":"object","description":"Metadata information related to custom fields defined within the NetSuite environment, offering a comprehensive and detailed overview of each custom field's attributes, configuration settings, and operational parameters. This includes critical details such as the field type (e.g., checkbox, select, date, text), label, internal ID, source lists or records for select-type fields, default values, validation rules, display properties, and behavioral flags like mandatory, read-only, or hidden status. The metadata acts as an authoritative reference for accurately managing, validating, and utilizing custom fields during integrations, data processing, and API interactions, ensuring that custom field data is interpreted and handled precisely according to its defined structure, constraints, and business logic. 
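\n\nRendered as data, one such metadata entry might look roughly like this (a sketch; the key names simply mirror the attributes listed in the examples below, and the exact shape is account- and field-type-specific):\n```json\n{\n  \"id\": \"custentity_is_active\",\n  \"type\": \"checkbox\",\n  \"label\": \"Is Active\",\n  \"mandatory\": true,\n  \"readOnly\": false\n}\n```\n\n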
It also enables dynamic adaptation to changes in custom field configurations, supporting robust and flexible integration workflows that maintain data integrity and enforce business rules effectively.\n\n**Field behavior**\n- Provides exhaustive metadata for each custom field, including type, label, internal ID, source references, default values, validation criteria, and display properties.\n- Reflects the current and authoritative configuration and constraints of custom fields within the NetSuite account.\n- Enables dynamic validation, processing, and rendering of custom fields during data import, export, and API operations.\n- Supports understanding of field dependencies, mandatory status, read-only or hidden flags, and other behavioral characteristics.\n- Facilitates enforcement of business rules and data integrity through validation and conditional logic based on metadata.\n\n**Implementation guidance**\n- Ensure this property is consistently populated with accurate, up-to-date, and comprehensive metadata for all relevant custom fields in the integration context.\n- Regularly synchronize and refresh metadata with the NetSuite environment to capture any additions, modifications, or deletions of custom fields.\n- Leverage this metadata to validate data payloads before transmission and to correctly interpret and process received data.\n- Use metadata details to implement field-specific logic, such as enforcing mandatory fields, applying validation rules, handling default values, and respecting read-only or hidden statuses.\n- Consider caching metadata where appropriate to optimize performance, while ensuring mechanisms exist to detect and apply updates promptly.\n\n**Examples**\n- Field type: \"checkbox\", label: \"Is Active\", id: \"custentity_is_active\", mandatory: true\n- Field type: \"select\", label: \"Customer Type\", id: \"custentity_customer_type\", sourceList: \"customerTypes\", readOnly: false\n- Field type: \"date\", label: \"Contract Start Date\", id: \"custentity_contract_start"},"recordType":{"type":"string","description":"recordType specifies the exact type of record within the NetSuite system that the API operation will interact with. This property is critical as it defines the schema, fields, validation rules, workflows, and behaviors applicable to the record being accessed, created, updated, or deleted. By accurately specifying the recordType, the API can correctly interpret the structure of the data payload, enforce appropriate business logic, and route the request to the correct processing module within NetSuite. This ensures precise data manipulation, retrieval, and integration aligned with the intended record context. 
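\n\nIn practice, recordType is set alongside the other NetSuite import properties defined in this schema, for example (a sketch; the values are illustrative and the internalIdLookup sub-fields shown are one possible combination, not a confirmed requirement):\n```json\n{\n  \"recordType\": \"customer\",\n  \"operation\": \"addupdate\",\n  \"internalIdLookup\": {\n    \"searchField\": \"email\",\n    \"expression\": \"email = 'example@example.com'\"\n  }\n}\n```\n\n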
Proper use of recordType enables seamless interaction with both standard and custom NetSuite records, supporting robust and flexible API operations across diverse business scenarios.\n\n**Field behavior**\n- Defines the specific NetSuite record type for the API request (e.g., customer, salesOrder, invoice).\n- Directly influences validation rules, available fields, permissible operations, and workflows in the request or response.\n- Determines the context, permissions, and business logic applicable to the record.\n- Must be set accurately and consistently to ensure correct processing and avoid errors.\n- Affects how the API interprets, validates, and processes the associated data payload.\n- Supports interaction with both standard and custom record types configured in the NetSuite account.\n- Enables the API to dynamically adapt to different record structures and business processes based on the recordType.\n\n**Implementation guidance**\n- Use the exact NetSuite internal record type identifiers as defined in official NetSuite documentation and account-specific customizations.\n- Validate the recordType value against the list of supported record types before processing the request to prevent errors.\n- Ensure compatibility of the recordType with the specific API endpoint, operation, and NetSuite account configuration.\n- Implement robust error handling for missing, invalid, or unsupported recordType values, providing clear and actionable feedback.\n- Regularly update the recordType list to reflect changes in NetSuite releases, custom record definitions, and account-specific configurations.\n- Consider role-based access and feature enablement that may affect the availability or behavior of certain record types.\n- When dealing with custom records, verify that the recordType matches the custom record’s internal ID and that the API user has appropriate permissions.\n\n**Examples**\n- \"customer\"\n- \"salesOrder\"\n- \"invoice\"\n- \"employee\"\n- \"itemFulfillment\"\n- \"vendorBill\"\n- \"purchaseOrder\"\n- \"customRecordType123\" (example of a custom record)\n\n**Important notes**\n- The recordType value is case"},"recordTypeId":{"type":"string","description":"recordTypeId is the unique identifier that specifies the exact type or category of a record within the NetSuite system. It defines the structure, fields, validation rules, workflows, and behavior applicable to the record instance, ensuring that API operations target the correct record schema and processing logic. This identifier is essential for routing requests to the appropriate record handlers, enforcing data integrity, and validating the data according to the specific record type's requirements. 
Once set for a record instance, the recordTypeId is immutable, as changing it would imply a fundamentally different record type and could lead to inconsistencies or errors in processing.\n\n**Field behavior**\n- Specifies the precise category or type of NetSuite record being accessed or manipulated.\n- Determines the applicable schema, fields, validation rules, workflows, and business logic for the record.\n- Directs API requests to the correct processing logic, endpoint, and validation routines based on record type.\n- Immutable for a given record instance; changing it indicates a different record type and is not permitted.\n- Influences permissions, access control, and integration behaviors tied to the record type.\n- Affects how related records and dependencies are handled within the system.\n\n**Implementation guidance**\n- Use the exact internal string ID or numeric identifier as defined by NetSuite for record types.\n- Validate the recordTypeId against the current list of supported, enabled, and custom NetSuite record types before processing.\n- Ensure consistency and compatibility between recordTypeId and related fields or data structures within the payload.\n- Implement robust error handling for invalid, unsupported, deprecated, or mismatched recordTypeId values.\n- Keep the recordTypeId definitions synchronized with updates, customizations, and configurations from the NetSuite account to avoid mismatches.\n- Consider case sensitivity and formatting rules as dictated by the NetSuite API specifications.\n- When integrating with custom record types, verify that custom IDs conform to naming conventions and are properly registered in the NetSuite environment.\n\n**Examples**\n- \"customer\" — representing a standard customer record type.\n- \"salesOrder\" — representing a sales order record type.\n- \"invoice\" — representing an invoice record type.\n- Numeric identifiers such as \"123\" if the system uses numeric codes for record types.\n- Custom record type IDs defined within a specific NetSuite account, e.g., \"customRecordType_456\".\n- \"employee\" — representing an employee record type.\n- \"purchaseOrder\" — representing a purchase order record type.\n\n**Important notes**"},"retryUpdateAsAdd":{"type":"boolean","description":"A boolean flag that controls whether failed update operations in NetSuite integrations should be automatically retried as add operations. When set to true, if an update attempt fails—commonly because the target record does not exist—the system will attempt to add the record instead. This mechanism helps maintain data synchronization by addressing scenarios where records may have been deleted, never created, or are otherwise missing, thereby reducing manual intervention and minimizing data discrepancies. 
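\n\nFor example, an update-only import that falls back to creating the record when no match is found (a sketch; values are illustrative):\n```json\n{\n  \"recordType\": \"customer\",\n  \"operation\": \"update\",\n  \"retryUpdateAsAdd\": true\n}\n```\n\n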
It ensures smoother integration workflows by providing a fallback strategy specifically for update failures, enhancing resilience and data consistency.\n\n**Field behavior**\n- Governs retry logic exclusively for update operations within NetSuite integrations.\n- When true, triggers an automatic retry as an add operation upon update failure.\n- When false or unset, update failures result in immediate error responses without retries.\n- Facilitates handling of missing or deleted records to improve synchronization accuracy.\n- Does not influence initial add operations or other API call types.\n- Activates retry logic only after detecting a failed update attempt.\n\n**Implementation guidance**\n- Enable this flag when there is a possibility that update requests target non-existent records.\n- Evaluate the potential for duplicate record creation if the add operation is not idempotent.\n- Implement comprehensive error handling and logging to monitor retry attempts and outcomes.\n- Conduct thorough testing in staging or development environments before deploying to production.\n- Coordinate with other retry or error recovery configurations to avoid conflicting behaviors.\n- Ensure accurate detection of update failure causes to apply retries appropriately and avoid unintended consequences.\n\n**Examples**\n- `retryUpdateAsAdd: true` — Automatically retries adding the record if an update fails due to a missing record.\n- `retryUpdateAsAdd: false` — Update failures are reported immediately without retrying as add operations.\n\n**Important notes**\n- Enabling this flag may lead to duplicate records if update failures occur for reasons other than missing records.\n- Use with caution in environments requiring strict data integrity, auditability, and compliance.\n- This flag only modifies retry behavior after an update failure; it does not alter the initial add or update request logic.\n- Accurate failure cause analysis is critical to ensure retries are applied correctly and safely.\n\n**Dependency chain**\n- Relies on execution of update operations against NetSuite records.\n- Works in conjunction with error detection and handling mechanisms to identify failure reasons.\n- Commonly used alongside other retry policies or error recovery settings to enhance integration robustness.\n\n**Technical details**\n- Data type: boolean\n- Default value: false"},"batchSize":{"type":"number","description":"Specifies the number of records to process in a single batch operation when interacting with the NetSuite API, directly influencing the efficiency, performance, and resource utilization of data processing tasks. This parameter controls how many records are sent or retrieved in one API call, balancing throughput with system constraints such as memory usage, network latency, and API rate limits. Proper configuration of batchSize is essential to optimize processing speed while minimizing the risk of errors, timeouts, or throttling by the NetSuite API. Adjusting batchSize allows for fine-tuning of integration workflows to accommodate varying workloads, system capabilities, and API restrictions, ensuring reliable and efficient data synchronization.  \n**Field behavior:**  \n- Determines the count of records included in each batch request to the NetSuite API.  \n- Directly impacts the number of API calls required to complete processing of all records.  \n- Larger batch sizes can improve throughput by reducing the number of calls but may increase memory consumption, processing latency, and risk of timeouts per call.  
\n- Smaller batch sizes reduce memory footprint and improve responsiveness but may increase the total number of API calls and associated overhead.  \n- Influences error handling complexity, as failures in larger batches may require more extensive retries or partial processing logic.  \n- Affects how quickly data is processed and synchronized between systems.  \n**Implementation guidance:**  \n- Select a batch size that balances optimal performance with available system resources, network conditions, and API constraints.  \n- Conduct thorough performance testing with varying batch sizes to identify the most efficient configuration for your specific workload and environment.  \n- Ensure the batch size complies with NetSuite API limits, including maximum records per request and rate limiting policies.  \n- Implement robust error handling to manage partial failures within batches, including retry mechanisms, logging, and potential batch splitting.  \n- Consider the complexity, size, and processing time of individual records when determining batch size, as more complex or larger records may require smaller batches.  \n- Monitor runtime metrics and adjust batchSize dynamically if possible, based on system load, API response times, and error rates.  \n**Examples:**  \n- 100 (process 100 records per batch for balanced throughput and resource use)  \n- 500 (process 500 records per batch to maximize throughput in high-capacity environments)  \n- 50 (smaller batch size suitable for environments with limited memory, stricter API limits, or higher error sensitivity)  \n**Important notes:**  \n- The batchSize setting"},"internalIdLookup":{"type":"object","properties":{"extract":{"type":"string","description":"Extract the relevant internal ID from the provided NetSuite data response to uniquely identify and reference specific records for subsequent API operations. This involves accurately parsing the response data—whether in JSON, XML, or other formats—to isolate the internal ID, which serves as a critical key for actions such as updates, deletions, or detailed queries within NetSuite. The extraction process must be precise, resilient to variations in response structures, and capable of handling nested or complex data formats to ensure reliable downstream processing and prevent errors in record manipulation.  \n**Field behavior:**  \n- Extracts a unique internal ID value from NetSuite API responses or data objects.  \n- Acts as the primary identifier for referencing and manipulating NetSuite records.  \n- Typically invoked after lookup, search, or query operations to obtain the necessary ID for further API calls.  \n- Supports handling of multiple data formats including JSON, XML, and potentially others.  \n- Ensures the extracted ID is valid, correctly formatted, and usable for subsequent operations.  \n- Handles nested or complex response structures to reliably locate the internal ID.  \n- Operates as a read-only field derived exclusively from API response data.  \n\n**Implementation guidance:**  \n- Implement robust and flexible parsing logic tailored to the expected NetSuite response formats.  \n- Validate the extracted internal ID against expected patterns (e.g., numeric strings) to ensure correctness.  \n- Incorporate comprehensive error handling for cases where the internal ID is missing, malformed, or unexpected.  \n- Design the extraction mechanism to be configurable and adaptable to changes in NetSuite API response schemas or record types.  
\n- Support asynchronous processing if API responses are received asynchronously or in streaming formats.  \n- Log extraction attempts and failures to facilitate debugging and maintenance.  \n- Regularly update extraction logic to accommodate API version changes or schema updates.  \n\n**Examples:**  \n- Extracting `\"internalId\": \"12345\"` from a JSON response such as `{ \"internalId\": \"12345\", \"name\": \"Sample Record\" }`.  \n- Parsing nested XML elements or attributes to retrieve the internalId value, e.g., `<record internalId=\"12345\">`.  \n- Using the extracted internal ID to perform update, delete, or detailed fetch operations on a NetSuite record.  \n- Handling cases where the internal ID is embedded within deeply nested or complex response objects.  \n\n**Important notes:**  \n- The internal ID uniquely identifies records within NetSuite and is essential for accurate and reliable API"},"searchField":{"type":"string","description":"The specific field within a NetSuite record that serves as the key attribute for performing an internal ID lookup search. This field determines which property of the record will be queried to locate and retrieve the corresponding internal ID, thereby directly influencing the precision, relevance, and efficiency of the lookup operation. Selecting an appropriate searchField is critical for ensuring accurate matches and optimal performance, as it typically should be a unique, indexed, or otherwise optimized attribute within the record schema. The choice of searchField impacts not only the speed of the search but also the uniqueness and reliability of the results returned, making it essential to align with the record type, user permissions, and any customizations present in the NetSuite environment. Proper selection of this field helps avoid ambiguous results and enhances the overall robustness of the lookup process.\n\n**Field behavior**\n- Identifies the exact attribute of the NetSuite record to be searched.\n- Directs the lookup process to compare the provided search value against this field.\n- Affects the accuracy, relevance, and uniqueness of the internal ID retrieved.\n- Commonly corresponds to fields that are indexed or uniquely constrained for efficient querying.\n- Determines the scope and filtering of the search within the record type.\n- Influences how the system handles multiple matches or ambiguous results.\n- Must correspond to a searchable field that the user has permission to access.\n\n**Implementation guidance**\n- Use only valid, supported field names as defined in the NetSuite record schema.\n- Prefer fields that are indexed or optimized for search to enhance lookup speed.\n- Verify the existence and searchability of the field on the specific record type being queried.\n- Choose fields that minimize duplicates to avoid ambiguous or multiple results.\n- Ensure exact case sensitivity and spelling alignment with NetSuite’s API definitions.\n- Test the field’s searchability considering user permissions, record configurations, and any customizations.\n- Avoid fields that are write-only, restricted, or non-searchable due to NetSuite permissions or settings.\n- Consider the impact of custom fields or scripts that may alter standard field behavior.\n- Validate that the field supports the type of search operation intended (e.g., exact match, partial match).\n\n**Examples**\n- \"externalId\" — to locate records by an externally assigned identifier.\n- \"email\" — to find records using an email address attribute.\n- \"name\" — to search by 
the record’s name or title field.\n- \"tranId\" — to query transaction records by their transaction ID.\n- \"entityId"},"expression":{"type":"string","description":"A string representing a search expression used to filter or query records within the NetSuite system based on specific criteria. This expression defines the precise conditions that records must meet to be included in the search results, enabling targeted and efficient data retrieval. It supports a rich syntax including logical operators (AND, OR, NOT), comparison operators (=, !=, >, <, >=, <=), field references, and literal values. Expressions can be nested to form complex queries tailored to specific business needs, allowing for granular control over the search logic. This property is essential for performing internal lookups by matching records against the defined criteria, ensuring that only relevant records are returned.\n\n**Field behavior**\n- Defines the criteria for searching or filtering records by specifying one or more conditions.\n- Supports logical operators such as AND, OR, and NOT to combine multiple conditions logically.\n- Allows inclusion of field names, comparison operators, and literal values to construct meaningful expressions.\n- Enables nested expressions for advanced querying capabilities.\n- Used internally to perform lookups by matching records against the defined expression.\n- Returns records that satisfy all specified conditions within the expression.\n- Evaluates expressions dynamically at runtime to reflect current data states.\n- Supports referencing related record fields to enable cross-record filtering.\n\n**Implementation guidance**\n- Ensure the expression syntax strictly conforms to NetSuite’s search expression format and supported operators.\n- Validate the expression prior to execution to catch syntax errors and prevent runtime failures.\n- Use parameterized values or safe input handling to mitigate injection risks and enhance security.\n- Support and correctly parse nested expressions to handle complex query scenarios.\n- Implement graceful handling for cases where the expression yields no matching records, avoiding errors.\n- Optimize expressions for performance by limiting complexity and avoiding overly broad criteria.\n- Provide clear error messages or feedback when expressions are invalid or unsupported.\n- Consider caching frequently used expressions to improve lookup efficiency.\n\n**Examples**\n- `\"status = 'Open' AND type = 'Sales Order'\"`\n- `\"createdDate >= '2023-01-01' AND createdDate <= '2023-12-31'\"`\n- `\"customer.internalId = '12345' OR customer.email = 'example@example.com'\"`\n- `\"NOT (status = 'Closed' OR status = 'Cancelled')\"`\n- `\"amount > 1000 AND (priority = 'High' OR priority = 'Urgent')\"`\n- `\"item.category = 'Electronics' AND quantity >= 10\"`\n- `\"lastModifiedDate >"}},"description":"internalIdLookup is a boolean property that specifies whether the lookup operation should utilize the unique internal ID assigned by NetSuite to an entity for record retrieval. When set to true, the system treats the provided identifier strictly as this immutable internal ID, enabling precise and unambiguous access to the exact record. This approach is essential in scenarios demanding high accuracy and reliability, as internal IDs are guaranteed to be unique within the NetSuite environment and remain consistent over time. 
If set to false or omitted, the lookup defaults to using external identifiers, display names, or other non-internal keys, which may be less precise and could lead to multiple matches or ambiguity in the results.\n\n**Field behavior**\n- Activates strict matching using NetSuite’s unique internal ID when true.\n- Defaults to external or display identifier matching when false or not specified.\n- Changes the lookup logic and matching criteria based on the flag’s value.\n- Ensures efficient, accurate, and unambiguous record retrieval by leveraging stable internal identifiers.\n\n**Implementation guidance**\n- Enable only when the exact internal ID of the target record is known, verified, and appropriate for the lookup.\n- Avoid setting to true if only external or display identifiers are available to prevent failed or incorrect lookups.\n- Confirm that the internal ID corresponds to the correct record type to avoid mismatches or errors.\n- Validate the format and existence of the internal ID before performing the lookup operation.\n- Implement comprehensive error handling to manage cases where the internal ID is invalid, missing, or does not correspond to any record.\n\n**Examples**\n- `internalIdLookup: true` — Retrieve a customer record by its NetSuite internal ID for precise identification.\n- `internalIdLookup: false` — Search for an inventory item using its external SKU or descriptive name.\n- Omitting `internalIdLookup` defaults to false, causing the system to perform lookups based on external or display identifiers.\n\n**Important notes**\n- Internal IDs are stable, unique, and system-generated identifiers within NetSuite, providing the most reliable reference for records.\n- Using internal IDs can improve lookup performance and reduce ambiguity compared to relying on external or display identifiers.\n- Incorrectly setting this flag to true with non-internal IDs will likely result in lookup failures or no matching records found.\n- This property is specific to NetSuite integrations and may not be applicable or recognized in other systems or contexts.\n- Ensure synchronization between the internal ID used and the expected record type to maintain data integrity and"},"preferences":{"type":"object","properties":{"ignoreReadOnlyFields":{"type":"boolean","description":"ignoreReadOnlyFields specifies whether the system should bypass any attempts to modify fields designated as read-only during data processing or update operations. When set to true, the system silently ignores changes to these protected fields, preventing errors or exceptions that would otherwise occur if such modifications were attempted. This behavior facilitates smoother integration and more resilient data handling, especially in environments where partial updates are common or where read-only constraints are strictly enforced. Conversely, when set to false, the system will attempt to update all fields, including those marked as read-only, which may result in errors or exceptions if such modifications are not permitted.  \n**Field behavior:**  \n- Controls whether read-only fields are excluded from update or modification attempts.  \n- When true, modifications to read-only fields are skipped silently without raising errors or exceptions.  \n- When false, attempts to modify read-only fields may cause errors, exceptions, or transaction failures.  \n- Helps maintain system stability by preventing unauthorized or unsupported changes to protected data.  
\n- Influences how partial updates and patch operations are processed in the presence of read-only constraints.  \n- Affects error reporting and feedback mechanisms during data validation and update workflows.  \n\n**Implementation guidance:**  \n- Enable this flag to enhance robustness and fault tolerance when integrating with APIs or systems enforcing read-only field restrictions.  \n- Use in scenarios where it is acceptable or expected that read-only fields remain unchanged during update operations.  \n- Ensure downstream business logic, validation rules, and data consistency checks accommodate the possibility that some fields may not be updated.  \n- Carefully evaluate the impact on data integrity, audit requirements, and compliance before enabling this behavior.  \n- Coordinate with audit, logging, and monitoring mechanisms to accurately capture which fields were updated, skipped, or ignored.  \n- Consider the implications for user feedback and error handling in client applications consuming the API.  \n\n**Examples:**  \n- ignoreReadOnlyFields: true — Update operations will silently skip read-only fields, avoiding errors and allowing partial updates to succeed.  \n- ignoreReadOnlyFields: false — Update operations will attempt to modify all fields, potentially triggering errors or exceptions if read-only fields are included.  \n- In a bulk update scenario, setting this to true prevents the entire batch from failing due to attempts to modify read-only fields.  \n- When performing a patch request, enabling this flag allows the system to apply only permissible changes without interruption.  \n\n**Important notes:**  \n- Setting this flag to true does not grant permission to"},"warningAsError":{"type":"boolean","description":"Indicates whether warnings encountered during processing should be escalated and treated as errors, causing the operation to fail immediately upon any warning detection. This setting enforces stricter validation and operational integrity by preventing processes from completing successfully if any warnings arise, thereby ensuring higher data quality, compliance standards, and system reliability. When enabled, it transforms non-critical warnings into blocking errors, which can halt workflows, trigger rollbacks, or initiate error handling routines. 
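\n\nTaken together with the other preference flags documented in this object, a preferences value might look like this (a sketch; the values are purely illustrative):\n```json\n{\n  \"ignoreReadOnlyFields\": true,\n  \"warningAsError\": false,\n  \"skipCustomMetadataRequests\": false\n}\n```\n\n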
This behavior is particularly valuable in environments where data accuracy and regulatory adherence are paramount, but it may also increase failure rates and require more robust error management and user communication strategies.\n\n**Field behavior**\n- When set to true, any warning triggers an error state, halting or failing the operation immediately.\n- When set to false, warnings are logged or reported but do not interrupt the workflow or cause failure.\n- Influences validation, data processing, integration, and transactional workflows where warnings might otherwise be non-blocking.\n- Affects error reporting mechanisms and may alter the flow of retries, rollbacks, or aborts.\n- Can change the overall system behavior by elevating the severity of warnings to errors.\n- Impacts downstream processes by potentially preventing partial or inconsistent data states.\n\n**Implementation guidance**\n- Use to enforce strict data integrity and prevent continuation of processes with potential issues.\n- Assess the impact on user experience, as enabling this may increase failure rates and necessitate enhanced error handling and user notifications.\n- Ensure client applications and integrations are designed to handle errors resulting from escalated warnings gracefully.\n- Clearly document the default behavior when this flag is unset to avoid ambiguity for users and developers.\n- Coordinate with logging, monitoring, and alerting systems to provide clear diagnostics and traceability when warnings are treated as errors.\n- Consider providing configuration options at multiple levels (user, project, global) to allow flexible enforcement.\n- Test thoroughly in staging environments to understand the operational impact before enabling in production.\n\n**Examples**\n- warningAsError: true — Data import fails immediately if any warnings are detected during validation, preventing partial or potentially corrupt data ingestion.\n- warningAsError: false — Warnings during data validation are logged but do not stop the import process, allowing for continued operation despite minor issues.\n- warningAsError: true — Integration workflows abort on any warning to maintain strict compliance with regulatory or business rules.\n- warningAsError: false — Batch processing continues despite warnings, with issues flagged for later review.\n- warningAsError: true —"},"skipCustomMetadataRequests":{"type":"boolean","description":"Indicates whether to bypass requests for custom metadata during API operations, enabling optimized performance by reducing unnecessary data retrieval and processing when custom metadata is not required. This flag allows fine-tuning of the API's behavior to specific use cases by controlling the inclusion of potentially resource-intensive custom metadata. By skipping these requests, the system can achieve faster response times, lower bandwidth usage, and reduced computational overhead, especially beneficial in high-throughput or performance-sensitive scenarios. Conversely, including custom metadata ensures comprehensive data context, which is critical for operations requiring full validation, auditing, or enrichment.  \n**Field behavior:**  \n- When set to true, the system omits fetching and processing any custom metadata associated with the request, improving efficiency and reducing payload size.  \n- When set to false or left unspecified, the system retrieves and processes all relevant custom metadata, ensuring complete data context.  
\n- Directly influences the amount of data transmitted and processed, impacting API responsiveness and resource utilization.  \n- Affects downstream workflows that depend on the presence or absence of custom metadata for decision-making or data integrity.  \n**Implementation guidance:**  \n- Employ this flag to optimize performance in scenarios where custom metadata is unnecessary, such as bulk data exports, read-only queries, or environments with stable metadata.  \n- Assess the potential impact on data completeness, validation, auditing, and enrichment to avoid compromising critical business logic.  \n- Clearly communicate the default behavior when the property is omitted to prevent misunderstandings among API consumers.  \n- Enforce boolean validation to maintain consistent and predictable API behavior.  \n- Consider how this setting interacts with caching mechanisms, data synchronization, and other metadata-related preferences to ensure coherent system behavior.  \n**Examples:**  \n- `true`: Skip all custom metadata requests to expedite processing and minimize resource consumption in performance-critical workflows.  \n- `false`: Include custom metadata requests to obtain full metadata details necessary for thorough data handling and validation.  \n**Important notes:**  \n- Skipping custom metadata may lead to incomplete data contexts if downstream processes rely on that metadata for essential functions like validation or auditing.  \n- This setting is most effective when metadata is static, infrequently changed, or irrelevant to the current operation.  \n- Altering this flag can influence caching strategies, data consistency, and synchronization across distributed systems.  \n- Ensure all relevant stakeholders understand the implications of toggling this flag to maintain data integrity and operational correctness.  \n**Dependency chain:**  \n- Relies on the existence and"}},"description":"An object encapsulating the user's customizable settings and options within the NetSuite environment, enabling a highly personalized and efficient user experience. This object encompasses a broad spectrum of configurable preferences including interface layout, language, timezone, notification methods, default currencies, date and time formats, themes, and other user-specific options that directly influence how the system behaves, appears, and interacts with the individual user. Preferences are designed to be flexible, supporting inheritance from role-based or system-wide defaults while allowing users to override settings to suit their unique workflows and requirements. 
The preferences object supports dynamic retrieval and partial updates, ensuring that changes can be made granularly without affecting unrelated settings, thereby maintaining data integrity and user experience consistency.\n\n**Field behavior**\n- Contains user-specific configuration settings that tailor the NetSuite experience to individual needs and roles.\n- Includes preferences related to UI layout, notification settings, default currencies, date/time formats, language, themes, and other personalization options.\n- Supports dynamic retrieval and partial updates, enabling users to modify individual preferences without overwriting the entire object.\n- Allows inheritance of preferences from role-based or system-wide defaults when user-specific settings are not explicitly defined.\n- Changes to preferences immediately affect the user interface, notification delivery, data presentation, and overall system behavior.\n- Preferences persist across sessions and devices, ensuring a consistent user experience.\n- Supports both simple scalar values and complex nested structures to accommodate diverse configuration needs.\n\n**Implementation guidance**\n- Structure as a nested JSON object with clearly defined and documented sub-properties grouped by categories such as notifications, display settings, localization, and system defaults.\n- Validate all input during updates to ensure data integrity, prevent invalid configurations, and maintain system stability.\n- Implement partial update mechanisms (e.g., PATCH semantics) to allow granular and efficient modifications.\n- Enforce strict access controls to ensure only authorized users can view or modify preferences.\n- Consider versioning the preferences schema to support backward compatibility and future enhancements.\n- Provide comprehensive documentation for each sub-property to facilitate correct usage, integration, and maintenance.\n- Optimize retrieval and update operations for performance, especially in environments with large user bases.\n- Ensure compatibility with role-based access controls and system-wide default settings to maintain coherent preference hierarchies.\n\n**Examples**\n- `{ \"language\": \"en-US\", \"timezone\": \"America/New_York\", \"currency\": \"USD\", \"notifications\": { \"email\": true, \"sms\": false } }`\n- `{ \"dashboardLayout\": \"compact\", \""},"file":{"type":"object","properties":{"name":{"type":"string","description":"The name of the file as it will be identified, stored, and managed within the NetSuite system. This filename serves as the primary unique identifier for the file within its designated folder or context, ensuring precise referencing, retrieval, and manipulation across various NetSuite modules, integrations, and user interfaces. It is critical that the filename is clear, descriptive, and relevant, as it is used not only for backend operations but also for display purposes in reports, dashboards, and file listings. The filename must strictly adhere to NetSuite’s naming conventions and restrictions to prevent conflicts, errors, or access issues, including limitations on character usage and length. 
Proper naming facilitates efficient file organization, version control, and seamless integration with external systems.\n\n**Field behavior**\n- Acts as the unique identifier for the file within its folder or storage context, preventing duplication.\n- Used extensively in file retrieval, linking, display, and management operations throughout NetSuite.\n- Must be unique within the folder to avoid overwriting or access conflicts.\n- Supports inclusion of standard file extensions to indicate file type and enable appropriate handling.\n- Case sensitivity may vary depending on the operating environment and integration specifics.\n- Represents only the filename without any directory or path information.\n- Changes to the filename after creation can impact existing references or integrations.\n\n**Implementation guidance**\n- Choose a concise, descriptive, and meaningful name to facilitate easy identification and management.\n- Avoid special characters, symbols, or reserved characters (e.g., \\ / : * ? \" < > |) disallowed by NetSuite’s file naming rules.\n- Ensure the filename length complies with NetSuite’s maximum character limit (typically 255 characters).\n- Always include the appropriate file extension (e.g., .pdf, .docx, .png) to clearly denote the file format.\n- Exclude any path separators, folder names, or hierarchical data—only the filename itself should be provided.\n- Adopt consistent naming conventions that support version control, categorization, or organizational standards.\n- Validate filenames programmatically where possible to enforce compliance and prevent errors.\n- Consider the impact of renaming files on existing workflows and references before making changes.\n\n**Examples**\n- \"invoice_2024_06.pdf\"\n- \"project_plan_v2.docx\"\n- \"company_logo.png\"\n- \"financial_report_Q1_2024.xlsx\"\n- \"user_manual_v1.3.pdf\"\n\n**Important notes**\n- The filename must not contain directory or path information; it strictly represents"},"fileType":{"type":"string","description":"The fileType property specifies the precise format or category of the file managed within the NetSuite file system, such as PDF, CSV, JPEG, or other supported types. This designation is essential because it directly affects how the system processes, displays, stores, and interacts with the file throughout its lifecycle. Correctly identifying the fileType ensures that files are rendered correctly in the user interface, processed through appropriate workflows, and subjected to any file-specific restrictions, permissions, or validations. It is typically required when uploading or creating new files to guarantee compatibility, prevent errors during file operations, and maintain data integrity. 
Additionally, the fileType influences system behaviors such as indexing, searchability, and integration with other NetSuite modules or external systems.\n\n**Field behavior**\n- Defines the file’s format or category, governing system handling, processing, and display.\n- Determines rendering, storage methods, and permissible manipulations within NetSuite.\n- Enables or restricts specific operations based on the file type’s inherent capabilities and limitations.\n- Usually mandatory during file creation or upload to ensure accurate system recognition and handling.\n- Influences validation checks, error handling, and workflow routing during file operations.\n- Affects integration points with other NetSuite features, such as reporting, scripting, or external API interactions.\n\n**Implementation guidance**\n- Validate the fileType against a predefined list of supported NetSuite file types to ensure compatibility and prevent errors.\n- Ensure the fileType accurately reflects the actual content format of the file to avoid processing failures or data corruption.\n- Prefer using standardized MIME types or NetSuite-specific identifiers where applicable for consistency and interoperability.\n- Implement robust error handling to gracefully manage unsupported, unrecognized, or mismatched file types.\n- Consider system versioning, configuration differences, and customizations that may affect supported file types.\n- When changing fileType post-creation, ensure appropriate reprocessing or format conversion to maintain data integrity.\n\n**Examples**\n- \"PDF\" for Portable Document Format files commonly used for documents.\n- \"CSV\" for Comma-Separated Values files frequently used for data exchange.\n- \"JPG\" or \"JPEG\" for image files in JPEG format.\n- \"PLAINTEXT\" for plain text files without formatting.\n- \"EXCEL\" for Microsoft Excel spreadsheet files.\n- \"XML\" for Extensible Markup Language files used in data interchange.\n- \"ZIP\" for compressed archive files.\n\n**Important notes**\n- Consistency between the fileType and the actual file content"},"folder":{"type":"string","description":"The identifier or path of the folder within the NetSuite file cabinet where the file is stored or intended to be stored. This property precisely defines the file's storage location, enabling organized management, categorization, and efficient retrieval within the NetSuite environment. It supports specification either as a unique numeric folder ID or as a string representing the folder path, accommodating both absolute and relative references depending on the API context. 
Proper assignment ensures files are correctly placed within the hierarchical folder structure, facilitating access control, streamlined file operations, and maintaining organizational consistency.\n\n**Field behavior**\n- Specifies the exact folder location for storing or moving the file within the NetSuite file cabinet.\n- Accepts either a numeric folder ID for unambiguous identification or a string folder path for hierarchical referencing.\n- Mandatory when uploading new files or relocating existing files to define their destination.\n- Optional during file metadata retrieval if folder context is implicit or not required.\n- Updating this property on an existing file triggers relocation to the specified folder.\n- Influences file visibility and access permissions based on folder-level security settings.\n- Supports both absolute and relative folder path formats depending on API usage context.\n\n**Implementation guidance**\n- Confirm the target folder exists and is accessible within the NetSuite file cabinet before assignment.\n- Prefer using the internal numeric folder ID to avoid ambiguity and ensure precise targeting.\n- Support both absolute and relative folder paths where the API context allows.\n- Enforce permission validation to verify that the user or integration has adequate rights to access or modify the folder.\n- Normalize folder paths to comply with NetSuite’s hierarchical structure and naming conventions.\n- Provide informative error responses if the folder is invalid, non-existent, or access is denied.\n- When moving files by updating this property, ensure that any dependent metadata or references are updated accordingly.\n\n**Examples**\n- \"123\" (numeric folder ID representing a specific folder)\n- \"/Documents/Invoices\" (string folder path indicating a nested folder structure)\n- \"456\" (numeric folder ID for a project-specific folder)\n- \"Shared/Marketing/Assets\" (relative folder path within the file cabinet hierarchy)\n\n**Important notes**\n- Folder IDs are unique within the NetSuite account and are the recommended method for precise folder referencing.\n- Folder paths must conform to NetSuite’s naming standards and reflect the correct folder hierarchy.\n- Changing the folder property on an existing file effectively moves the file within the file cabinet.\n- Adequate permissions are required to access or modify the specified folder;"},"folderInternalId":{"type":"string","description":"The internal identifier of the folder within the NetSuite file cabinet where the file is stored. This unique, system-generated integer ID precisely specifies the folder location, enabling accurate file operations such as upload, download, retrieval, and organization. 
It is essential for associating files with their respective folders and maintaining the hierarchical structure of the file cabinet, ensuring consistent file management and access control.\n\n**Field behavior**\n- Represents a unique, system-assigned internal ID for a folder in the NetSuite file cabinet.\n- Required to specify or retrieve the exact folder location during file management operations.\n- Must correspond to an existing folder within the NetSuite account to ensure valid file placement.\n- Changing this ID effectively moves the file to a different folder within the file cabinet.\n- Permissions on the folder influence access and operations on the associated files.\n- The field is mandatory when creating or updating a file to define its storage location.\n- Supports hierarchical folder structures by linking files to nested folders via their internal IDs.\n\n**Implementation guidance**\n- Validate that the folderInternalId is a valid integer and corresponds to an existing folder before use.\n- Retrieve valid folder internal IDs through NetSuite’s SuiteScript, REST API, or UI to avoid errors.\n- Implement error handling for cases where the folderInternalId is missing, invalid, or points to a restricted folder.\n- Update this ID carefully, as it changes the file’s folder location and may affect file accessibility.\n- Ensure that the user or integration has appropriate permissions to access or modify the target folder.\n- When moving files between folders, confirm that the destination folder supports the file type and intended use.\n- Cache or store folderInternalId values cautiously to prevent stale references in dynamic folder structures.\n\n**Examples**\n- 12345\n- 67890\n- 101112\n\n**Important notes**\n- This ID is distinct from the folder name; it is a system-generated unique identifier.\n- Modifying the folderInternalId moves the file to a different folder, impacting file organization.\n- Folder permissions and sharing settings can restrict or enable file operations based on this ID.\n- Accurate use of this ID is critical for maintaining the integrity of the file cabinet’s folder hierarchy.\n- The folderInternalId cannot be arbitrarily assigned; it must be obtained from NetSuite’s system.\n- Changes to folder structure or deletion of folders may invalidate existing folderInternalId references.\n\n**Dependency chain**\n- Depends on the existence and structure of folders within the NetSuite"},"internalId":{"type":"string","description":"Unique identifier assigned internally to a file within the NetSuite system, serving as the primary key for that file record. This identifier is essential for programmatic referencing and management of the file, enabling precise operations such as retrieval, update, or deletion. The internalId is immutable once assigned and guaranteed to be unique across all file records, ensuring accurate and consistent identification. It is automatically generated by NetSuite upon file creation and is consistently used across both REST and SOAP API endpoints to perform file-related actions. 
This identifier is critical for maintaining data integrity, enabling seamless integration, and supporting reliable file management workflows within the NetSuite ecosystem.\n\n**Field behavior**\n- Acts as the definitive and immutable identifier for a file within NetSuite.\n- Must be unique for each file record to prevent conflicts.\n- Required for all API operations targeting a specific file, including retrieval, updates, and deletions.\n- Cannot be manually assigned, duplicated, or reassigned to another file.\n- Remains constant throughout the lifecycle of the file record.\n\n**Implementation guidance**\n- Always retrieve the internalId from NetSuite responses after file creation or queries.\n- Use the internalId exclusively for all subsequent API calls involving the file.\n- Avoid any manual assignment or modification of the internalId to prevent inconsistencies.\n- Validate that the internalId conforms to NetSuite’s expected format before use in API calls.\n- Handle errors gracefully when an invalid or non-existent internalId is provided.\n\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"1001\"\n\n**Important notes**\n- Providing an incorrect or non-existent internalId will result in API errors or failed operations.\n- The internalId is distinct and separate from the file name, file path, or any external identifiers.\n- It plays a critical role in maintaining data integrity and ensuring accurate file referencing within NetSuite.\n- The internalId cannot be reused or recycled for different files.\n\n**Dependency chain**\n- Requires the existence of the corresponding file record within NetSuite.\n- Utilized by API methods such as get, update, and delete for file management operations.\n- May be referenced by other NetSuite entities or records that link to the file.\n- Dependent on NetSuite’s internal database and indexing mechanisms for uniqueness and retrieval.\n\n**Technical details**\n- Represented as a string or numeric value depending on the API context.\n- Automatically assigned by NetSuite during the file creation process.\n- Stored as the primary key in the NetSuite database for the file record."},"backupFolderInternalId":{"type":"string","description":"backupFolderInternalId is the unique internal identifier assigned by NetSuite to a specific folder within the file cabinet designated exclusively for storing backup files. This identifier is essential for accurately locating, managing, and organizing backup data within the system’s hierarchical file storage structure. It ensures that all backup operations—such as file creation, modification, and retrieval—are precisely targeted to the correct folder, thereby facilitating reliable data preservation, streamlined backup workflows, and efficient recovery processes.  \n**Field behavior:**  \n- Uniquely identifies the backup folder within the NetSuite file cabinet, eliminating ambiguity in folder selection.  \n- Used to specify or retrieve the exact storage location for backup files during automated or manual backup operations.  \n- Typically assigned and referenced during backup configuration, file management tasks, or programmatic interactions with the NetSuite API.  \n- Enables seamless integration with automated backup processes by directing files to the intended folder without manual intervention.  \n**Implementation guidance:**  \n- Verify that the internal ID corresponds to an existing, active, and accessible folder within the NetSuite file cabinet before use.  
\n- Ensure the designated folder has appropriate permissions set to allow creation, modification, and retrieval of backup files.  \n- Avoid using internal IDs of folders that are restricted, archived, deleted, or otherwise unsuitable for active backup storage to prevent errors.  \n- Maintain consistent use of this ID across all API calls, scripts, and backup configurations to uphold data integrity and organizational consistency.  \n**Examples:**  \n- \"12345\" (example of a valid internal folder ID)  \n- \"67890\"  \n- \"34567\"  \n**Important notes:**  \n- This identifier is internal to NetSuite and does not correspond to external folder names, display names, or file system paths.  \n- Changing this ID will redirect backup files to a different folder, which may disrupt existing backup workflows or data organization.  \n- Proper folder permissions are essential to avoid backup failures, access denials, or data loss.  \n- Once set for a given backup configuration, this ID should be treated as immutable unless a deliberate and well-documented change is required.  \n**Dependency chain:**  \n- Requires the target folder to exist within the NetSuite file cabinet and be properly configured for backup storage.  \n- Interacts closely with backup configuration settings, file management APIs, and NetSuite’s permission and security models.  \n- Dependent on NetSuite’s internal folder management system to maintain uniqueness and integrity of folder"}},"description":"Represents a comprehensive file object within the NetSuite system, serving as a fundamental entity for uploading, storing, managing, and referencing a wide variety of file types including documents, images, spreadsheets, audio, video, and other media formats. This object facilitates seamless integration and association of file data with NetSuite records, transactions, custom entities, and workflows, enabling efficient document management, retrieval, and automation across the platform. It supports detailed metadata management such as file name, type, size, folder location, internal identifiers, encoding formats, and timestamps, ensuring precise control, traceability, and auditability of files. The file object can represent both files stored internally in the NetSuite file Cabinet and externally referenced files via URLs or other means, providing flexibility in file handling and access. It enforces platform-specific constraints on file size, type, and permissions to maintain system integrity, security, and compliance with organizational policies. 
Additionally, it supports versioning and updates to existing files, allowing for effective file lifecycle management within NetSuite.\n\n**Field behavior**\n- Used to specify, upload, retrieve, update, or link files associated with NetSuite records, transactions, custom entities, or workflows.\n- Supports a broad spectrum of file types including PDFs, images (JPEG, PNG, GIF), spreadsheets (Excel, CSV), text documents, audio, video, and other media formats.\n- Can represent files stored internally within the NetSuite file Cabinet or externally referenced files via URLs or other means.\n- Enables attachment of files to records to provide enhanced context, documentation, and audit trails.\n- Manages comprehensive file metadata including internalId, name, fileType, size, folder location, encoding, and timestamps.\n- Supports versioning and updates to existing files where applicable.\n- Enforces platform-specific restrictions on file size, type, and access permissions.\n- Facilitates secure file access and sharing based on user roles and permissions.\n\n**Implementation guidance**\n- Include complete and accurate metadata such as internalId, name, fileType, size, folder, encoding, and timestamps to ensure reliable file referencing and management.\n- Validate file types and sizes against NetSuite’s platform restrictions prior to upload to prevent errors and ensure compliance.\n- Use appropriate encoding methods (e.g., base64) for file content during API transmission to maintain data integrity.\n- Employ multipart/form-data encoding for file uploads when required by the API specifications.\n- Ensure users have the necessary roles and permissions to upload, retrieve, or modify files"}}},"NetsuiteDistributed":{"type":"object","description":"Configuration for NetSuite Distributed (SuiteApp 2.0) import operations.\nThis is the primary sub-schema for NetSuiteDistributedImport adaptorType.\nThe API field name is \"netsuite_da\".\n\n**Key fields**\n- operation: REQUIRED — the NetSuite operation (add, update, addupdate, etc.)\n- recordType: REQUIRED — the NetSuite record type (e.g., \"customer\", \"salesOrder\")\n- mapping: field mappings from source data to NetSuite fields\n- internalIdLookup: how to find existing records for update/addupdate/delete\n- restletVersion: defaults to \"suiteapp2.0\", rarely needs to be changed\n","properties":{"operation":{"type":"string","enum":["add","update","addupdate","attach","detach","delete"],"description":"The NetSuite operation to perform. REQUIRED. This is a plain string value, NOT an object.\n\n**Valid values**\n- \"add\" — Create new records only.\n- \"update\" — Update existing records. Requires internalIdLookup.\n- \"addupdate\" — Upsert: update if found, create if not. Requires internalIdLookup. Most common.\n- \"attach\" — Attach a record to another record.\n- \"detach\" — Detach a record from another record.\n- \"delete\" — Delete records. Requires internalIdLookup.\n\n**Guidance**\n- Default to \"addupdate\" when user says \"sync\", \"upsert\", or \"create or update\".\n- Use \"add\" when user says \"create\" or \"insert\" without mentioning updates.\n- For \"update\", \"addupdate\", and \"delete\", you MUST also set internalIdLookup.\n\n**Important**\nThis field is a string, NOT an object. Set it directly: \"operation\": \"addupdate\"\nDo NOT send {\"type\": \"addupdate\"} — that causes a Cast error.\n"},"recordType":{"type":"string","description":"The NetSuite record type to import into. 
REQUIRED.\n\nMust match a valid NetSuite record type identifier.\n\n**Examples**\n- \"customer\"\n- \"salesOrder\"\n- \"invoice\"\n- \"purchaseOrder\"\n- \"vendorBill\"\n- \"itemFulfillment\"\n- \"employee\"\n- \"inventoryItem\"\n- \"customrecord_myrecord\" (custom record types)\n"},"recordIdentifier":{"type":"string","description":"Custom record identifier, used to identify the specific record type when\nimporting into custom record types.\n"},"recordTypeId":{"type":"string","description":"The internal record type ID. Used for custom record types in NetSuite where\nthe numeric ID is needed in addition to the recordType string.\n"},"restletVersion":{"type":"string","enum":["suitebundle","suiteapp1.0","suiteapp2.0"],"description":"The version of the NetSuite RESTlet to use. This is a plain STRING, NOT an object.\n\nDefaults to \"suiteapp2.0\" when useSS2Restlets is true, \"suitebundle\" otherwise.\nAlmost always \"suiteapp2.0\" for modern integrations — rarely needs to be explicitly set.\n\nIMPORTANT: Set as a plain string value, e.g. \"suiteapp2.0\"\nDo NOT send {\"type\": \"suiteapp2.0\"} — that causes a Cast error.\n"},"useSS2Restlets":{"type":"boolean","description":"Whether to use SuiteScript 2.0 RESTlets. Defaults to true for modern integrations.\nWhen true, restletVersion defaults to \"suiteapp2.0\".\n"},"missingOrCorruptedDAConfig":{"type":"boolean","description":"Flag indicating whether the Distributed Adaptor configuration is missing or corrupted.\nSet by the system — do not set manually.\n"},"batchSize":{"type":"number","description":"Number of records to process per batch. Controls how many records are sent\nto NetSuite in a single API call. Typical values: 50-200.\n"},"internalIdLookup":{"type":"object","description":"Configuration for looking up existing NetSuite records by internal ID.\nREQUIRED when operation is \"update\", \"addupdate\", or \"delete\".\n\nDefines how to find existing records in NetSuite to match against incoming data.\n","properties":{"extract":{"type":"string","description":"The path in the source record to extract the lookup value from.\nExample: \"internalId\" or \"externalId\"\n"},"searchField":{"type":"string","description":"The NetSuite field to search against.\nExample: \"externalId\", \"email\", \"name\", \"tranId\"\n"},"operator":{"type":"string","description":"The comparison operator for the lookup.\nExample: \"is\", \"contains\", \"startswith\"\n"},"expression":{"type":"string","description":"A NetSuite search expression for complex lookup conditions.\nUsed for multi-field or conditional lookups.\n"}}},"hooks":{"type":"object","description":"Script hooks for custom processing at different stages of the import.\nEach hook references a SuiteScript file and function.\n","properties":{"preMap":{"type":"object","description":"Runs before field mapping is applied.","properties":{"fileInternalId":{"type":"string","description":"NetSuite internal ID of the SuiteScript file."},"function":{"type":"string","description":"Name of the function to execute."},"configuration":{"type":"object","description":"Configuration object passed to the hook function."}}},"postMap":{"type":"object","description":"Runs after field mapping, before submission to NetSuite.","properties":{"fileInternalId":{"type":"string","description":"NetSuite internal ID of the SuiteScript file."},"function":{"type":"string","description":"Name of the function to execute."},"configuration":{"type":"object","description":"Configuration object passed to the hook 
function."}}},"postSubmit":{"type":"object","description":"Runs after the record is submitted to NetSuite.","properties":{"fileInternalId":{"type":"string","description":"NetSuite internal ID of the SuiteScript file."},"function":{"type":"string","description":"Name of the function to execute."},"configuration":{"type":"object","description":"Configuration object passed to the hook function."}}}}},"mapping":{"type":"object","description":"Field mappings that define how source data fields map to NetSuite record fields.\nContains both body-level fields and sublist (line-item) fields.\n","properties":{"fields":{"type":"array","items":{"type":"object","properties":{"extract":{"type":"string","description":"Path in the source record to extract the value from.\nUse dot notation for nested fields (e.g., \"address.city\").\n"},"generate":{"type":"string","description":"The NetSuite field ID to write the value to (e.g., \"companyname\", \"email\", \"subsidiary\").\n"},"hardCodedValue":{"type":"string","description":"A static value to always use instead of extracting from source data.\nMutually exclusive with extract.\n"},"lookupName":{"type":"string","description":"Reference to a lookup defined in netsuite_da.lookups by name."},"dataType":{"type":"string","description":"Data type hint for the field value (e.g., \"string\", \"number\", \"date\", \"boolean\").\n"},"internalId":{"type":"boolean","description":"Whether the value is a NetSuite internal ID reference."},"immutable":{"type":"boolean","description":"When true, this field is only set on record creation, not on updates."},"discardIfEmpty":{"type":"boolean","description":"When true, skip this field mapping if the extracted value is empty."},"extractDateFormat":{"type":"string","description":"Date format of the extracted value (e.g., \"MM/DD/YYYY\", \"ISO8601\").\nUsed to parse date strings from source data.\n"},"extractDateTimezone":{"type":"string","description":"Timezone of the extracted date value (e.g., \"America/New_York\")."},"subRecordMapping":{"type":"object","description":"Nested mapping for NetSuite subrecord fields (e.g., address, inventory detail)."},"conditional":{"type":"object","description":"Conditional logic for when to apply this field mapping.\n","properties":{"lookupName":{"type":"string","description":"Lookup to evaluate for the condition."},"when":{"type":"string","enum":["record_created","record_updated","extract_not_empty","lookup_not_empty","lookup_empty","ignore_if_set"],"description":"When to apply this mapping:\n- record_created: only on new records\n- record_updated: only on existing records\n- extract_not_empty: only when extracted value is non-empty\n- lookup_not_empty: only when lookup returns a value\n- lookup_empty: only when lookup returns no value\n- ignore_if_set: skip if field already has a value\n"}}}}},"description":"Body-level field mappings. 
Each entry maps a source field to a NetSuite body field.\n"},"lists":{"type":"array","items":{"type":"object","properties":{"generate":{"type":"string","description":"The NetSuite sublist ID (e.g., \"item\" for sales order line items,\n\"addressbook\" for address sublists).\n"},"jsonPath":{"type":"string","description":"JSON path in the source data that contains the array of sublist records.\n"},"fields":{"type":"array","items":{"type":"object","properties":{"extract":{"type":"string","description":"Path in the sublist record to extract the value from."},"generate":{"type":"string","description":"NetSuite sublist field ID to write to."},"hardCodedValue":{"type":"string","description":"Static value for this sublist field."},"lookupName":{"type":"string","description":"Reference to a lookup by name."},"dataType":{"type":"string","description":"Data type hint for the field value."},"internalId":{"type":"boolean","description":"Whether the value is a NetSuite internal ID reference."},"isKey":{"type":"boolean","description":"Whether this field is a key field for matching existing sublist lines."},"immutable":{"type":"boolean","description":"Only set on record creation, not updates."},"discardIfEmpty":{"type":"boolean","description":"Skip if the extracted value is empty."},"extractDateFormat":{"type":"string","description":"Date format of the extracted value."},"extractDateTimezone":{"type":"string","description":"Timezone of the extracted date value."},"subRecordMapping":{"type":"object","description":"Nested mapping for subrecord fields within the sublist."},"conditional":{"type":"object","properties":{"lookupName":{"type":"string"},"when":{"type":"string","enum":["record_created","record_updated","extract_not_empty","lookup_not_empty","lookup_empty","ignore_if_set"]}}}}},"description":"Field mappings for each column in the sublist."}}},"description":"Sublist (line-item) mappings. Each entry maps source data to a NetSuite sublist.\n"}}},"lookups":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique name for this lookup, referenced by lookupName in field mappings.\n"},"recordType":{"type":"string","description":"NetSuite record type to search (e.g., \"customer\", \"item\").\n"},"searchField":{"type":"string","description":"NetSuite field to search against (e.g., \"email\", \"externalId\", \"name\").\n"},"resultField":{"type":"string","description":"Field from the lookup result to return (e.g., \"internalId\").\n"},"expression":{"type":"string","description":"NetSuite search expression for complex lookup conditions.\n"},"operator":{"type":"string","description":"Comparison operator (e.g., \"is\", \"contains\", \"startswith\").\n"},"includeInactive":{"type":"boolean","description":"Whether to include inactive records in lookup results."},"useDefaultOnMultipleMatches":{"type":"boolean","description":"When true, use the default value if multiple records match the lookup."},"allowFailures":{"type":"boolean","description":"When true, lookup failures do not stop the import."},"map":{"type":"object","description":"Static value mapping for lookup results (key-value pairs)."},"default":{"type":"string","description":"Default value when lookup returns no results."}}},"description":"Lookup definitions used to resolve reference values from NetSuite.\nReferenced by name from field mappings via the lookupName property.\n"},"rawOverride":{"type":"object","description":"Raw override object for advanced use cases. 
When useRawOverride is true,\nthis object is sent directly to the NetSuite API, bypassing normal mapping.\n"},"useRawOverride":{"type":"boolean","description":"When true, uses rawOverride instead of the normal mapping configuration.\n"},"isMigrated":{"type":"boolean","description":"System flag indicating whether this import was migrated from a legacy format.\n"},"preferences":{"type":"object","properties":{"ignoreReadOnlyFields":{"type":"boolean","description":"When true, silently skip read-only fields instead of raising errors."},"warningAsError":{"type":"boolean","description":"When true, treat NetSuite warnings as errors that stop the import."},"skipCustomMetadataRequests":{"type":"boolean","description":"When true, skip fetching custom field metadata to improve performance."}},"description":"Import behavior preferences that control how NetSuite handles the import operation.\n"},"retryUpdateAsAdd":{"type":"boolean","description":"When true, if an update fails because the record doesn't exist, automatically\nretry as an add operation. Useful for initial syncs where records may not exist yet.\n"},"isFileProvider":{"type":"boolean","description":"Whether this import handles files in the NetSuite File Cabinet.\n"},"file":{"type":"object","description":"File cabinet configuration for file-based imports into NetSuite.\n","properties":{"name":{"type":"string","description":"Filename for the file in NetSuite File Cabinet."},"fileType":{"type":"string","description":"NetSuite file type (e.g., \"PDF\", \"CSV\", \"PLAINTEXT\", \"EXCEL\", \"XML\").\n"},"folder":{"type":"string","description":"Folder path or name in the NetSuite File Cabinet."},"folderInternalId":{"type":"string","description":"Internal ID of the target folder in NetSuite File Cabinet."},"internalId":{"type":"string","description":"Internal ID of an existing file to update."},"backupFolderInternalId":{"type":"string","description":"Internal ID of a backup folder for file versioning."}}},"customFieldMetadata":{"type":"object","description":"Metadata about custom fields on the target record type.\nPopulated by the system from NetSuite metadata — do not set manually.\n"}}},"Rdbms":{"type":"object","description":"Configuration for RDBMS import operations. Used for database imports into SQL Server, MySQL, PostgreSQL, Snowflake, Oracle, MariaDB, and other relational databases.\n\n**Query type determines which fields are required**\n\n| queryType          | Required fields                | Do NOT set        |\n|--------------------|--------------------------------|-------------------|\n| [\"per_record\"]     | query (array of SQL strings)   | bulkInsert        |\n| [\"per_page\"]       | query (array of SQL strings)   | bulkInsert        |\n| [\"bulk_insert\"]    | bulkInsert object              | query             |\n| [\"bulk_load\"]      | bulkLoad object                | query             |\n\n**Critical:** query IS AN ARRAY OF STRINGS\nThe query field must be an array of plain strings, NOT a single string and NOT an array of objects.\nCorrect: [\"MERGE INTO target USING ...\"]\nWrong: \"MERGE INTO target ...\"\nWrong: [{\"query\": \"MERGE INTO target ...\"}]\n","properties":{"lookups":{"type":["string","null"],"description":"A collection of predefined reference data sets or key-value pairs designed to standardize, validate, and streamline input values across the application. 
These lookup entries serve as a centralized repository of commonly used values, enabling consistent data entry, minimizing errors, and facilitating efficient data retrieval and processing. They are essential for populating user interface elements such as dropdown menus and autocomplete fields, and for enforcing validation rules within business logic. Lookup data can be static or dynamically updated, and may support localization to accommodate diverse user bases. Additionally, lookups often include metadata such as descriptions, effective dates, and status indicators to provide context and support lifecycle management. They play a critical role in maintaining data integrity, enhancing user experience, and ensuring interoperability across different system modules and external integrations.\n\n**Field behavior**\n- Contains structured sets of reference data including codes, labels, enumerations, or mappings.\n- Drives UI components by providing selectable options and autocomplete suggestions.\n- Ensures data consistency by standardizing input values across different modules.\n- Supports both static and dynamic updates to reflect changes in business requirements.\n- May include metadata like descriptions, effective dates, or status indicators for enhanced context.\n- Facilitates localization and internationalization to support multiple languages and regions.\n- Enables validation logic by restricting inputs to predefined acceptable values.\n- Supports versioning to track changes and maintain historical data integrity.\n\n**Implementation guidance**\n- Organize lookup data as dictionaries, arrays, or database tables for efficient access and management.\n- Implement caching strategies to optimize performance and reduce redundant data retrieval.\n- Enforce uniqueness and relevance of lookup entries within their specific contexts.\n- Provide robust mechanisms for updating, versioning, and extending lookup data without disrupting system stability.\n- Incorporate localization and internationalization support for user-facing lookup values.\n- Secure sensitive lookup data with appropriate access controls and auditing.\n- Design APIs or interfaces for easy retrieval and management of lookup data.\n- Ensure synchronization of lookup data across distributed systems or microservices.\n\n**Examples**\n- Country codes and names: {\"US\": \"United States\", \"CA\": \"Canada\", \"MX\": \"Mexico\"}\n- Status codes representing entity states: {\"1\": \"Active\", \"2\": \"Inactive\", \"3\": \"Pending\"}\n- Product categories for e-commerce platforms: {\"ELEC\": \"Electronics\", \"FASH\": \"Fashion\", \"HOME\": \"Home & Garden\"}\n- Payment methods: {\"CC\": \"Credit Card\", \"PP\": \"PayPal\", \"BT\": \"Bank Transfer\"}"},"query":{"type":"array","items":{"type":"string"},"description":"Array of SQL query strings to execute. REQUIRED when queryType is [\"per_record\"], [\"per_page\"], or [\"first_page\"].\n\nEach element is a complete SQL statement as a plain string. Typically contains a single query.\n\nUse Handlebars to inject values from incoming records. 
RDBMS queries REQUIRE the `record.` prefix and triple braces `{{{ }}}`.\n\n**Format — array of strings**\nThis field is an ARRAY OF STRINGS, not a single string and not an array of objects.\n- CORRECT: [\"INSERT INTO users (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n- WRONG: \"INSERT INTO users ...\"  (not an array)\n- WRONG: [{\"query\": \"INSERT INTO users ...\"}]  (objects are invalid — causes Cast error)\n- WRONG: [\"INSERT INTO users (name) VALUES ('{{name}}')\"]  (missing record. prefix — will produce empty values)\n\n**Critical:** RDBMS HANDLEBARS SYNTAX\n- MUST use `record.` prefix: `{{{record.fieldName}}}`, NOT `{{{fieldName}}}`\n- MUST use triple braces `{{{ }}}` for value references — in RDBMS context, `{{record.field}}` outputs `'value'` (wrapped in single quotes), while `{{{record.field}}}` outputs `value` (raw). Triple braces give you control over quoting in your SQL.\n- Block helpers (`{{#each}}`, `{{#if}}`, `{{/each}}`) use double braces as normal.\n- Nested fields: `{{{record.properties.email}}}`\n\n**Examples**\n- INSERT: [\"INSERT INTO customers (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n- UPDATE: [\"UPDATE inventory SET qty = {{{record.quantity}}} WHERE sku = '{{{record.sku}}}'\"]\n- UPSERT (MySQL): [\"INSERT INTO users (id, name) VALUES ({{{record.id}}}, '{{{record.name}}}') ON DUPLICATE KEY UPDATE name = '{{{record.name}}}'\"]\n- MERGE (Snowflake): [\"MERGE INTO target USING (SELECT '{{{record.email}}}' AS email) AS src ON target.email = src.email WHEN MATCHED THEN UPDATE SET name = '{{{record.name}}}' WHEN NOT MATCHED THEN INSERT (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n\n**String vs numeric values in handlebars**\n- Strings: wrap in single quotes — '{{{record.name}}}'\n- Numbers: no quotes — {{{record.quantity}}}\n"},"queryType":{"type":["array","null"],"items":{"type":"string","enum":["INSERT","UPDATE","bulk_insert","per_record","per_page","first_page","bulk_load"]},"description":"Specifies the execution strategy for the SQL operation.\n\n**CRITICAL: This field DETERMINES whether `query` or `bulkInsert` is required.**\n\n**Preference order (use the highest-performance option the operation supports)**\n\n1. **`[\"bulk_load\"]`** — fastest. Stages data as a file then loads via COPY/bulk mechanism. Supports upsert via `primaryKeys`, and custom merge/ignore logic via `overrideMergeQuery`. **Currently Snowflake and NSAW.**\n2. **`[\"bulk_insert\"]`** — batch INSERT via multi-row VALUES. No upsert/ignore logic. All RDBMS types.\n3. **`[\"per_page\"]`** — one SQL statement per page of records. All RDBMS types.\n4. **`[\"per_record\"]`** — one SQL statement per record. Full control over individual record SQL. All RDBMS types.\n\nAlways prefer the highest option that satisfies the operation. `bulk_load` handles INSERT, UPSERT, and custom merge logic. `bulk_insert` is best for pure INSERTs when `bulk_load` isn't available. `per_page` handles custom SQL at batch level. `per_record` gives full control over individual record SQL.\n\n**Decision tree (Follow in order)**\n\n**STEP 1: Does the database support bulk_load?**\n- If Snowflake, NSAW, or another database with bulk_load support:\n  → **USE `[\"bulk_load\"]`** with:\n    - `bulkLoad.tableName` (required)\n    - `bulkLoad.primaryKeys` for upsert/merge (auto-generates MERGE)\n    - `bulkLoad.overrideMergeQuery: true` for custom SQL (ignore-existing, conditional updates, multi-table ops). 
Override SQL references `{{import.rdbms.bulkLoad.preMergeTemporaryTable}}` for the staging table.\n    - No `primaryKeys` for pure INSERT (auto-generates INSERT)\n\n**STEP 2: Database does NOT support bulk_load**\n- **Pure INSERT**: → **USE `[\"bulk_insert\"]`** (requires `bulkInsert` object)\n- **UPDATE / UPSERT / custom matching logic**: → **USE `[\"per_page\"]`** (requires `query` field). Only use `[\"per_record\"]` if the logic truly cannot be expressed as a batch operation.\n\n**Critical relationship to other fields**\n\n| queryType        | REQUIRES         | DO NOT SET       |\n|------------------|------------------|------------------|\n| `[\"per_record\"]` | `query` field    | `bulkInsert`     |\n| `[\"bulk_insert\"]`| `bulkInsert` obj | `query`          |\n\n**Do not use**\n- `[\"INSERT\"]` or `[\"UPDATE\"]` as standalone values\n- Multiple types combined\n\n**Examples**\n\n**Insert with ignoreExisting (MUST use per_record)**\nPrompt: \"Import customers, ignore existing based on email\"\n```json\n\"queryType\": [\"per_record\"],\n\"query\": [\"INSERT INTO customers (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n```\n\n**Pure insert (no duplicate checking)**\nPrompt: \"Insert all records into users table\"\n```json\n\"queryType\": [\"bulk_insert\"],\n\"bulkInsert\": { ... }\n```\n\n**Update Operation**\nPrompt: \"Update inventory counts\"\n```json\n\"queryType\": [\"per_record\"],\n\"query\": [\"UPDATE inventory SET qty = {{{record.qty}}} WHERE sku = '{{{record.sku}}}'\"]\n```\n\n**Run Once Per Page**\nPrompt: \"Execute this stored procedure for each page of results\"\n```json\n\"queryType\": [\"per_page\"]\n```\n**per_page uses a different Handlebars context than per_record.** The context is the entire batch (`batch_of_records`), not a single `record.` object. Loop through with `{{#each batch_of_records}}...{{{record.fieldName}}}...{{/each}}`.\n\n**Run Once at Start**\nPrompt: \"Run this cleanup script before processing\"\n```json\n\"queryType\": [\"first_page\"]\n```"},"bulkInsert":{"type":"object","description":"**CRITICAL: This object is ONLY used when `queryType` is `[\"bulk_insert\"]`.**\n\n✅ **SET THIS** when:\n- `queryType` is `[\"bulk_insert\"]`\n- Pure INSERT operation with NO ignoreExisting/duplicate checking\n\n❌ **DO NOT SET THIS** when:\n- `queryType` is `[\"per_record\"]` (use `query` field instead)\n- Operation involves UPDATE, UPSERT, or ignoreExisting logic\n\nIf the prompt mentions \"ignore existing\", \"skip duplicates\", or \"match on field\",\nuse `queryType: [\"per_record\"]` with the `query` field instead of this object.","properties":{"tableName":{"type":"string","description":"The name of the database table into which the bulk insert operation will be executed. This value must correspond to a valid, existing table within the target relational database management system (RDBMS). It serves as the primary destination for inserting multiple rows of data efficiently in a single operation. The table name can include schema or namespace qualifiers if supported by the database (e.g., \"schemaName.tableName\"), allowing precise targeting within complex database structures. 
Proper validation and sanitization of this value are essential to ensure the operation's success and to prevent SQL injection or other security vulnerabilities.\n\n**Field behavior**\n- Specifies the exact destination table for the bulk insert operation.\n- Must correspond to an existing table in the database schema.\n- Case sensitivity and naming conventions depend on the underlying RDBMS.\n- Supports schema-qualified names where applicable.\n- Used directly in the SQL `INSERT INTO` statement.\n- Influences how data is mapped and inserted during the bulk operation.\n\n**Implementation guidance**\n- Verify the existence and accessibility of the table in the target database before execution.\n- Ensure compliance with the RDBMS naming rules, including reserved keywords, allowed characters, and maximum length.\n- Support and correctly handle schema-qualified table names, respecting database-specific syntax.\n- Sanitize and validate input rigorously to prevent SQL injection and other security risks.\n- Apply appropriate quoting or escaping mechanisms based on the RDBMS (e.g., backticks for MySQL, double quotes for PostgreSQL).\n- Consider the impact of case sensitivity, especially when dealing with quoted identifiers.\n\n**Examples**\n- \"users\"\n- \"public.orders\"\n- \"inventory.products\"\n- \"sales_data_2024\"\n- \"dbo.CustomerRecords\"\n\n**Important notes**\n- The target table must exist and have the appropriate schema and permissions to accept bulk inserts.\n- Incorrect, misspelled, or non-existent table names will cause the operation to fail.\n- Case sensitivity varies by database; for example, PostgreSQL treats quoted identifiers as case-sensitive.\n- Avoid using unvalidated dynamic input to mitigate security vulnerabilities.\n- Schema qualifiers should be used consistently to avoid ambiguity in multi-schema environments.\n\n**Dependency chain**\n- Relies on the database connection and authentication configuration.\n- Interacts with other bulkInsert parameters such as column mappings and data payload.\n- May be influenced by transaction management, locking mechanisms, and database constraints during the insert.\n- Dependent on the database's metadata for validation and existence checks.\n\n**Techn**"},"batchSize":{"type":"string","description":"The number of records to be inserted into the database in a single batch during a bulk insert operation. This parameter is crucial for optimizing the performance and efficiency of bulk data loading by controlling how many records are grouped together before being sent to the database. Proper tuning of batchSize balances memory consumption, transaction overhead, and throughput, enabling the system to handle large volumes of data efficiently without overwhelming resources or causing timeouts. 
Adjusting batchSize directly impacts transaction size, network utilization, error handling granularity, and recovery strategies, making it essential to tailor this value based on the specific database capabilities, system resources, and workload characteristics.\n\n**Field behavior**\n- Specifies the exact count of records processed and committed in a single batch during bulk insert operations.\n- Determines the frequency and size of database transactions, influencing overall throughput, latency, and system responsiveness.\n- Larger batch sizes can improve throughput but may increase memory usage, transaction duration, and risk of timeouts or locks.\n- Smaller batch sizes reduce memory footprint and transaction time but may increase the total number of transactions and associated overhead.\n- Defines the scope of error handling, as failures typically affect only the current batch, allowing for partial retries or rollbacks.\n- Controls the granularity of commit points, impacting rollback, recovery, and consistency strategies in case of failures.\n\n**Implementation guidance**\n- Choose a batch size that respects the database’s transaction limits, available system memory, and network conditions.\n- Conduct benchmarking and load testing with different batch sizes to identify the optimal balance for your environment.\n- Monitor system performance continuously and adjust batchSize dynamically if supported, to adapt to varying workloads.\n- Implement comprehensive error handling to manage partial batch failures, including retry logic or compensating transactions.\n- Verify compatibility with database drivers, ORM frameworks, or middleware, which may impose constraints or optimizations on batch sizes.\n- Consider the size, complexity, and serialization overhead of individual records, as larger or more complex records may necessitate smaller batches.\n- Factor in network latency and bandwidth to optimize data transfer efficiency and reduce potential bottlenecks.\n\n**Examples**\n- 1000: Suitable for moderate bulk insert operations, balancing speed and resource consumption effectively.\n- 50000: Ideal for high-throughput environments with ample memory and finely tuned database configurations.\n- 100: Appropriate for systems with limited memory or where minimizing transaction size and duration is critical.\n- 5000: A common default batch size providing a good compromise between performance and resource usage.\n\n**Important**"}}},"bulkLoad":{"type":"object","properties":{"tableName":{"type":"string","description":"The name of the target table within the relational database management system (RDBMS) where the bulk load operation will insert, update, or merge data. This property specifies the exact destination table for the bulk data operation and must correspond to a valid, existing table in the database schema. It can include schema qualifiers if supported by the database (e.g., schema.tableName), and must adhere to the naming conventions and case sensitivity rules of the target RDBMS. Proper specification of this property is critical to ensure data is loaded into the correct location without errors or unintended data modification. 
Accurate definition of the table name helps maintain data integrity and prevents operational failures during bulk load processes.\n\n**Field behavior**\n- Identifies the specific table in the database to receive the bulk-loaded data.\n- Must reference an existing table accessible with the current database credentials.\n- Supports schema-qualified names where applicable (e.g., schema.tableName).\n- Used as the target for insert, update, or merge operations during bulk load.\n- Case sensitivity and naming conventions depend on the underlying database system.\n- Determines the scope and context of the bulk load operation within the database.\n- Influences transactional behavior, locking, and concurrency controls during the operation.\n\n**Implementation guidance**\n- Verify the target table exists and the executing user has sufficient permissions for data modification.\n- Ensure the table name respects the case sensitivity and naming rules of the RDBMS.\n- Avoid reserved keywords or special characters unless properly escaped or quoted according to database requirements.\n- Support fully qualified names including schema or database prefixes if required to avoid ambiguity.\n- Validate compatibility between the table schema and the incoming data to prevent type mismatches or constraint violations.\n- Consider transactional behavior, locking, and concurrency implications on the target table during bulk load.\n- Test the bulk load operation in a development or staging environment to confirm correct table targeting.\n- Implement error handling to catch and report issues related to invalid or inaccessible table names.\n\n**Examples**\n- \"customers\"\n- \"sales_data_2024\"\n- \"public.orders\"\n- \"inventory.products\"\n- \"dbo.EmployeeRecords\"\n\n**Important notes**\n- Incorrect or misspelled table names will cause the bulk load operation to fail.\n- The target table schema must be compatible with the incoming data structure to avoid errors.\n- Bulk loading may overwrite, append, or merge data depending on the load mode; ensure this aligns with business requirements.\n- Some databases require quoting or escaping of table names, especially if they"},"primaryKeys":{"type":["string","null"],"description":"primaryKeys specifies the list of column names that uniquely identify each record in the target relational database table during a bulk load operation. These keys are essential for maintaining data integrity by enforcing uniqueness constraints and enabling precise identification of rows for operations such as inserts, updates, upserts, or conflict resolution. Properly defining primaryKeys ensures that duplicate records are detected and handled appropriately, preventing data inconsistencies and supporting efficient data merging processes. This property supports both single-column and composite primary keys, requiring the exact column names as defined in the target schema, and plays a critical role in guiding the bulk load mechanism to correctly match and manipulate records based on their unique identifiers.  \n**Field behavior:**  \n- Defines one or more columns that collectively serve as the unique identifier for each row in the table.  \n- Enforces uniqueness constraints during bulk load operations to prevent duplicate entries.  \n- Facilitates detection and resolution of conflicts or duplicates during data insertion or updating.  \n- Influences the behavior of upsert, merge, or conflict resolution mechanisms in the bulk load process.  
\n- Ensures that each record can be reliably matched and updated based on the specified keys.  \n- Supports both single-column and composite keys, maintaining the order of columns as per the database schema.  \n**Implementation guidance:**  \n- Include all columns that form the complete primary key, especially for composite keys, maintaining the correct order as defined in the database schema.  \n- Ensure column names exactly match those in the target database, respecting case sensitivity where applicable.  \n- Validate that all specified primary key columns exist in both the source data and the target table schema before initiating the bulk load.  \n- Avoid leaving the primaryKeys list empty when uniqueness enforcement or conflict resolution is required.  \n- Consider the immutability and stability of primary key columns to prevent inconsistencies during repeated load operations.  \n- Confirm that the primary key columns are indexed or constrained appropriately in the target database to optimize performance.  \n**Examples:**  \n- [\"id\"] — single-column primary key.  \n- [\"order_id\", \"product_id\"] — composite primary key consisting of two columns.  \n- [\"user_id\", \"timestamp\"] — composite key combining user identifier and timestamp for uniqueness.  \n- [\"customer_id\", \"account_number\", \"region\"] — multi-column composite primary key.  \n**Important notes:**  \n- Omitting primaryKeys when required can result in data duplication, failed loads, or incorrect conflict handling.  \n- The order of"},"overrideMergeQuery":{"type":"boolean","description":"A custom SQL query string that fully overrides the default merge operation executed during bulk load processes in a relational database management system (RDBMS). This property empowers users to define precise, tailored merge logic that governs how records are inserted, updated, or deleted in the target database table when handling large-scale data operations. 
By specifying this query, users can implement complex matching conditions, custom conflict resolution strategies, additional filtering criteria, or conditional logic that surpasses the system-generated default behavior, ensuring the bulk load aligns perfectly with specific business rules, data integrity requirements, or performance optimizations.\n\n**Field behavior**\n- When provided, this query completely replaces the standard merge statement used in bulk load operations.\n- Supports detailed customization of merge logic, including custom join conditions, conditional updates, selective inserts, and optional deletes.\n- If omitted, the system automatically generates and executes a default merge query based on the schema, keys, and data mappings.\n- The query must be syntactically compatible with the target RDBMS and support any required parameterization for dynamic data injection.\n- Executed within the transactional context of the bulk load to maintain atomicity, consistency, and rollback capabilities in case of failure.\n- The system expects the query to handle all necessary merge scenarios to avoid partial or inconsistent data states.\n- Overrides apply globally for the bulk load operation, affecting all records processed in that batch.\n\n**Implementation guidance**\n- Validate the custom SQL syntax thoroughly before execution to prevent runtime errors and ensure compatibility with the target RDBMS.\n- Ensure the query comprehensively addresses all required merge operations—insert, update, and optionally delete—to maintain data integrity.\n- Support parameter placeholders or bind variables if the system injects dynamic values during execution, and document their usage clearly.\n- Provide users with clear documentation, templates, or examples outlining the expected query structure, required clauses, and best practices.\n- Implement robust safeguards against SQL injection and other security vulnerabilities when accepting and executing custom queries.\n- Test the custom query extensively in a controlled staging environment to verify correctness, performance, and side effects before deploying to production.\n- Consider transaction isolation levels and locking behavior to avoid deadlocks or contention during bulk load operations.\n- Encourage users to include comprehensive error handling and logging within the query or surrounding execution context to facilitate troubleshooting.\n\n**Examples**\n- A MERGE statement that updates existing records based on a composite primary key and inserts new records when no match is found.\n- A merge query incorporating additional WHERE clauses to exclude certain"}},"description":"Specifies whether bulk loading is enabled for relational database management system (RDBMS) operations, allowing for the efficient insertion of large volumes of data through a single, optimized operation. Enabling bulkLoad significantly improves performance during data import, migration, or batch processing by minimizing the overhead associated with individual row inserts and leveraging database-specific bulk insert mechanisms. This feature is particularly beneficial for initial data loads, large-scale migrations, or periodic batch updates where speed, resource efficiency, and reduced transaction time are critical. Bulk loading may temporarily alter database behavior—such as disabling indexes, constraints, or triggers—to maximize throughput, and often requires elevated permissions and careful management of transactional integrity to ensure data consistency. 
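Note that the surrounding prose presents `bulkLoad` as a simple on/off switch, while the schema itself defines it as an object whose sub-properties (`tableName`, `primaryKeys`, `overrideMergeQuery`) were described just above. A minimal, hypothetical sketch of the object form follows; the values are placeholders, and `primaryKeys` is written as a plain string because that is the declared type even though the prose examples show arrays, so the accepted encoding for composite keys should be confirmed against the live API.

```json
{
  "bulkLoad": {
    "tableName": "public.orders",
    "primaryKeys": "order_id"
  }
}
```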
Proper use of bulkLoad can lead to substantial reductions in processing time and system resource consumption during large data operations.\n\n**Field behavior**\n- When set to true, the system uses optimized bulk loading techniques to insert data in large batches, greatly enhancing throughput and efficiency.\n- When false or omitted, data insertion defaults to standard row-by-row operations, which may be slower and consume more resources.\n- Typically enabled during scenarios involving initial data population, large batch imports, or data migration processes.\n- May temporarily disable or defer enforcement of indexes, constraints, and triggers to improve performance during the bulk load operation.\n- Can affect database locking and concurrency, potentially locking tables or partitions for the duration of the bulk load.\n- Bulk loading operations may bypass certain transactional controls, affecting rollback and error recovery behavior.\n\n**Implementation guidance**\n- Verify that the target RDBMS supports bulk loading and understand its specific syntax, capabilities, and limitations.\n- Assess transaction management implications, as bulk loading may alter or bypass triggers, constraints, and rollback mechanisms.\n- Implement robust error handling and post-load validation to ensure data integrity and consistency after bulk operations.\n- Monitor system resources such as CPU, memory, and I/O throughput during bulk load to avoid performance bottlenecks or outages.\n- Plan for potential impacts on database availability, locking behavior, and concurrent access during bulk load execution.\n- Ensure data is properly staged and preprocessed to meet the format and requirements of the bulk loading mechanism.\n- Coordinate bulk loading with maintenance windows or low-traffic periods to minimize disruption.\n\n**Examples**\n- `bulkLoad: true` — Enables bulk loading to accelerate insertion of large datasets efficiently.\n- `bulkLoad: false` — Disables bulk loading, performing inserts using standard row-by-row methods.\n- `bulkLoad` omitted — Defaults"},"updateLookupName":{"type":"string","description":"Specifies the exact name of the lookup table or entity within the relational database management system (RDBMS) that is targeted for update operations. This property serves as a precise identifier to determine which lookup data set should be modified, ensuring accurate and efficient targeting within the database schema. It typically corresponds to a physical table name or a logical entity name defined in the database and must strictly align with the existing schema to enable successful updates without errors or unintended side effects. Proper use of this property is critical for maintaining data integrity during update operations on lookup data, as it directly influences which records are affected. 
Accurate specification helps prevent accidental data corruption and supports clear, maintainable database interactions.\n\n**Field behavior**\n- Defines the specific lookup table or entity to be updated during an operation.\n- Acts as a key identifier for the update process to locate the correct data set.\n- Must be provided when performing update operations on lookup data.\n- Influences the scope and effect of the update by specifying the target entity.\n- Changes to this value directly affect which data is modified in the database.\n- Invalid or missing values will cause update operations to fail or target unintended data.\n\n**Implementation guidance**\n- Ensure the value exactly matches the existing lookup table or entity name in the database schema.\n- Validate against the RDBMS naming conventions, including allowed characters, length restrictions, and case sensitivity.\n- Maintain consistent naming conventions across the application to prevent ambiguity and errors.\n- Account for case sensitivity based on the underlying database system’s configuration and collation settings.\n- Avoid using reserved keywords, special characters, or whitespace that may cause conflicts or syntax errors.\n- Implement validation checks to catch misspellings or invalid names before executing update operations.\n- Incorporate error handling to manage cases where the specified lookup name does not exist or is inaccessible.\n\n**Examples**\n- \"country_codes\"\n- \"user_roles\"\n- \"product_categories\"\n- \"status_lookup\"\n- \"department_list\"\n\n**Important notes**\n- Providing an incorrect or misspelled name will result in failed update operations or runtime errors.\n- This field must not be left empty when an update operation is intended.\n- It is only relevant and required when performing update operations on lookup tables or entities.\n- Changes to this value should be carefully managed to avoid unintended data modifications or corruption.\n- Ensure appropriate permissions exist for updating the specified lookup entity to prevent authorization failures.\n- Consistent use of this property across environments (development, staging, production) is essential"},"updateExtract":{"type":"string","description":"Specifies the comprehensive configuration and parameters for updating an existing data extract within a relational database management system (RDBMS). This property defines how the extract's data is modified, supporting various update strategies such as incremental updates, full refreshes, conditional changes, or merges. It ensures that the extract remains accurate, consistent, and aligned with business rules and data integrity requirements by controlling the scope, method, and transactional behavior of update operations. 
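A purely illustrative fragment pairing the two update-related fields from this section. `status_lookup` comes from the example list above; the `updateExtract` value is an invented placeholder, since the field is typed as a string here but no concrete sample is given, so treat both the value and the flat nesting as assumptions to verify against the live API.

```json
{
  "updateLookupName": "status_lookup",
  "updateExtract": "lastModifiedDate"
}
```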
Additionally, it allows for fine-grained control through filters, timestamps, and criteria to selectively update portions of the extract, while managing concurrency and maintaining data consistency throughout the process.\n\n**Field behavior**\n- Determines the update strategy applied to an existing data extract, including incremental, full refresh, conditional, or merge operations.\n- Controls how new data interacts with existing extract data—whether by overwriting, appending, or merging.\n- Supports selective updates using filters, timestamps, or conditional criteria to target specific subsets of data.\n- Manages transactional integrity to ensure updates are atomic, consistent, isolated, and durable (ACID-compliant).\n- Coordinates update execution to prevent conflicts, data corruption, or partial updates.\n- Enables configuration of error handling, logging, and rollback mechanisms during update processes.\n- Handles concurrency control to avoid race conditions and ensure data consistency in multi-user environments.\n- Allows scheduling and triggering of update operations based on time, events, or external signals.\n\n**Implementation guidance**\n- Ensure update configurations comply with organizational data governance, security, and integrity policies.\n- Validate all input parameters and update conditions to avoid inconsistent or partial data modifications.\n- Implement robust transactional support to allow rollback on failure and maintain data consistency.\n- Incorporate detailed logging and error reporting to facilitate monitoring, auditing, and troubleshooting.\n- Optimize update methods based on data volume, change frequency, and performance requirements, balancing between incremental and full refresh approaches.\n- Tailor update logic to leverage specific capabilities and constraints of the target RDBMS, including locking and concurrency controls.\n- Coordinate update timing and execution with downstream systems and data consumers to minimize disruption.\n- Design update processes to be idempotent where possible to support safe retries and recovery.\n- Consider the impact of update latency on data freshness and downstream analytics.\n\n**Examples**\n- Configuring an incremental update that uses a last-modified timestamp column to append only new or changed records to the extract.\n- Defining a full refresh update that completely replaces the existing extract data with a newly extracted dataset.\n- Setting up"},"ignoreLookupName":{"type":"string","description":"Specifies whether the lookup name should be ignored during relational database management system (RDBMS) operations, such as query generation, data retrieval, or relationship resolution. When enabled, this flag instructs the system to bypass the use of the lookup name, which can alter how relationships, joins, or references are resolved within database queries. This behavior is particularly useful in scenarios where the lookup name is redundant, introduces unnecessary complexity, or causes performance overhead. 
Additionally, it allows for alternative identification methods to be prioritized over the lookup name, enabling more flexible or optimized query strategies.\n\n**Field behavior**\n- When set to true, the system excludes the lookup name from all relevant database operations, including query construction, join conditions, and filtering criteria.\n- When false or omitted, the lookup name is actively utilized to resolve references, enforce relationships, and optimize data retrieval.\n- Determines whether explicit lookup names override or supplement default naming conventions, schema-based identifiers, or inferred relationships.\n- Affects how related data is fetched, potentially influencing join strategies, query plans, and lookup optimizations.\n- Impacts the generation of SQL or other query languages by controlling the inclusion of lookup name references.\n\n**Implementation guidance**\n- Enable this flag to enhance query performance by skipping unnecessary lookup name resolution when it is known to be non-essential or redundant.\n- Thoroughly evaluate the impact on data integrity and correctness to ensure that ignoring the lookup name does not result in incomplete, inaccurate, or inconsistent query results.\n- Validate all downstream processes, components, and integrations that depend on lookup names to prevent breaking dependencies or causing data inconsistencies.\n- Consider the underlying database schema design, naming conventions, and relationship mappings before enabling this flag to avoid unintended side effects.\n- Integrate this flag within query builders, ORM layers, data access modules, or middleware to conditionally include or exclude lookup names during query generation and execution.\n- Implement comprehensive testing and monitoring to detect any adverse effects on application behavior or data retrieval accuracy when this flag is toggled.\n\n**Examples**\n- `ignoreLookupName: true` — The system bypasses the lookup name, generating queries without referencing it, which may simplify query logic and improve execution speed.\n- `ignoreLookupName: false` — The lookup name is included in query logic, ensuring that relationships and references are resolved using the defined lookup identifiers.\n- Omission of the property defaults to `false`, meaning lookup names are considered and used unless explicitly ignored.\n\n**Important notes**\n- Ign"},"ignoreExtract":{"type":"string","description":"Specifies whether the extraction process from the relational database management system (RDBMS) should be entirely skipped. When set to true, the system bypasses the data extraction step, which is particularly useful in scenarios where data extraction is managed externally, has already been completed, or is unnecessary for the current operation. This flag directly controls whether the extraction engine initiates the retrieval of data from the source database, thereby influencing all subsequent stages that depend on the extracted data, such as transformation, validation, and loading. 
Proper use of this flag ensures flexibility in workflows by allowing integration with external data pipelines or pre-extracted datasets without redundant extraction efforts.\n\n**Field behavior**\n- Determines if the data extraction phase from the RDBMS is executed or omitted.\n- When true, no data is pulled from the source database, effectively skipping extraction.\n- When false or omitted, the extraction process runs normally, retrieving data as configured.\n- Influences downstream processes such as data transformation, validation, and loading that depend on extracted data.\n- Must be explicitly set to true to skip extraction; otherwise, extraction proceeds by default.\n- Impacts the overall data pipeline flow by potentially altering the availability of fresh data.\n\n**Implementation guidance**\n- Default value should be false to ensure extraction occurs unless intentionally overridden.\n- Validate input strictly as a boolean to prevent misconfiguration.\n- Ensure that skipping extraction does not cause failures or data inconsistencies in subsequent pipeline stages.\n- Use this flag in workflows where extraction is handled outside the current system or when working with pre-extracted datasets.\n- Incorporate checks or safeguards to confirm that necessary data is available from alternative sources when extraction is skipped.\n- Log or notify when extraction is skipped to maintain transparency in data processing workflows.\n- Coordinate with other system components to handle scenarios where extraction is bypassed, ensuring smooth pipeline execution.\n\n**Examples**\n- `ignoreExtract: true` — Extraction step is completely bypassed.\n- `ignoreExtract: false` — Extraction step is performed as usual.\n- Field omitted — Defaults to false, so extraction occurs normally.\n- Used in a pipeline where data is pre-loaded from a file or external system, setting `ignoreExtract: true` to avoid redundant extraction.\n\n**Important notes**\n- Setting this flag to true assumes that the system has access to the required data through other means; otherwise, downstream processes may fail or produce incomplete results.\n- Incorrect use can lead to missing data, causing errors or inconsistencies in the data pipeline."}}},"S3":{"type":"object","description":"Configuration for S3 exports","properties":{"region":{"type":"string","description":"The AWS region where the S3 bucket is physically located, representing the specific geographical area within AWS's global infrastructure that hosts the bucket. This designation directly affects data access latency, availability, redundancy, and compliance with regional data governance, privacy laws, and residency requirements. Selecting the appropriate region is critical for optimizing performance, minimizing costs related to data transfer and storage, and ensuring adherence to legal and organizational policies. The region must be specified using a valid AWS region identifier that accurately corresponds to the bucket's actual location to avoid connectivity issues, authentication failures, and improper routing of API requests. 
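To anchor the region guidance, a minimal hypothetical fragment. The lowercase `s3` wrapper key is an assumption (this excerpt only names the schema "S3"), and the region code is one of the AWS identifiers listed in the examples that follow.

```json
{
  "s3": {
    "region": "us-east-1"
  }
}
```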
Proper region selection also influences disaster recovery strategies, service availability, and integration with other AWS services, making it a foundational parameter in S3 bucket configuration and management.\n\n**Field behavior**\n- Defines the physical and logical location of the S3 bucket within AWS's global network.\n- Directly impacts data access speed, latency, and throughput.\n- Determines compliance with regional data sovereignty, privacy, and security regulations.\n- Must be a valid AWS region code that matches the bucket’s deployment location.\n- Influences availability zones, fault tolerance, and disaster recovery planning.\n- Affects endpoint URL construction and API request routing.\n- Plays a role in cost optimization by affecting storage pricing and data transfer fees.\n- Governs integration capabilities with other AWS services available in the selected region.\n\n**Implementation guidance**\n- Use official AWS region codes such as \"us-east-1\", \"eu-west-2\", or \"ap-southeast-2\" as defined by AWS.\n- Validate the region value against the current list of supported AWS regions to prevent misconfiguration.\n- Ensure the specified region exactly matches the bucket’s physical location to avoid access and authentication errors.\n- Consider cost implications including storage pricing, data transfer fees, and cross-region replication expenses.\n- When migrating or replicating buckets across regions, update this property accordingly and plan for data transfer and downtime.\n- Verify service availability and feature support in the chosen region before deployment.\n- Regularly review AWS region updates and deprecations to maintain compatibility.\n- Use region-specific endpoints when constructing API requests to ensure proper routing.\n\n**Examples**\n- us-east-1 (Northern Virginia, USA)\n- eu-central-1 (Frankfurt, Germany)\n- ap-southeast-2 (Sydney, Australia)\n- sa-east-1 (São Paulo, Brazil)\n- ca-central-1 (Canada Central)\n\n**Important notes**\n- Incorrect or mismatched region specification"},"bucket":{"type":"string","description":"The name of the Amazon S3 bucket where data or objects are stored or will be stored. This bucket serves as the primary container within the AWS S3 service, uniquely identifying the storage location for your data across all AWS accounts globally. The bucket name must strictly comply with AWS naming conventions, including being DNS-compliant, entirely lowercase, and free of underscores, uppercase letters, or IP address-like formats, ensuring global uniqueness and compatibility. It is essential that the specified bucket already exists and that the user or service has the necessary permissions to access, upload, or modify its contents. 
Selecting the appropriate bucket, particularly with regard to its AWS region, can significantly impact application performance, cost efficiency, and adherence to data residency and regulatory requirements.\n\n**Field behavior**\n- Specifies the target S3 bucket for all data storage and retrieval operations.\n- Must reference an existing bucket with proper access permissions.\n- Acts as a globally unique identifier within the AWS S3 ecosystem.\n- Immutable after initial assignment; changing the bucket requires specifying a new bucket name.\n- Influences data locality, latency, cost, and compliance based on the bucket’s region.\n\n**Implementation guidance**\n- Validate bucket names against AWS S3 naming rules: lowercase letters, numbers, and hyphens only; no underscores, uppercase letters, or IP address formats.\n- Confirm the bucket’s existence and verify access permissions before performing operations.\n- Consider the bucket’s AWS region to optimize latency, cost, and compliance with data governance policies.\n- Implement comprehensive error handling for scenarios such as bucket not found, access denied, or insufficient permissions.\n- Align bucket selection with organizational policies and regulatory requirements concerning data storage and residency.\n\n**Examples**\n- \"my-app-data-bucket\"\n- \"user-uploads-2024\"\n- \"company-backups-eu-west-1\"\n\n**Important notes**\n- Bucket names are globally unique across all AWS accounts and cannot be reused or renamed once created.\n- Access control is managed through AWS IAM roles, policies, and bucket policies, which must be correctly configured.\n- The bucket’s region selection is critical for optimizing performance, minimizing costs, and ensuring legal compliance.\n- Bucket naming restrictions prohibit uppercase letters, underscores, and IP address-like formats to maintain DNS compliance.\n\n**Dependency chain**\n- Depends on the availability and stability of the AWS S3 service.\n- Requires valid AWS credentials with appropriate permissions to access or modify the bucket.\n- Typically used alongside object keys, region configurations,"},"fileKey":{"type":"string","description":"The unique identifier or path string that specifies the exact location of a file stored within an Amazon S3 bucket. This key acts as the primary reference for locating, accessing, managing, and manipulating a specific file within the storage system. It can be a simple filename or a hierarchical path that mimics folder structures to logically organize files within the bucket. The fileKey is case-sensitive and must be unique within the bucket to prevent conflicts or overwriting. It supports UTF-8 encoded characters and can include delimiters such as slashes (\"/\") to simulate directories, enabling structured file organization. Proper encoding and adherence to AWS naming conventions are essential to ensure seamless integration with S3 APIs and web requests. 
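A small hypothetical fragment combining the bucket with a file key, using values drawn from the example lists in this section; as above, the lowercase `s3` wrapper key is an assumption rather than something this excerpt confirms.

```json
{
  "s3": {
    "bucket": "user-uploads-2024",
    "fileKey": "documents/report2024.pdf"
  }
}
```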
Additionally, the fileKey plays a critical role in access control, lifecycle policies, and event notifications within the S3 environment.\n\n**Field behavior**\n- Uniquely identifies a file within the S3 bucket namespace.\n- Serves as the primary reference in all file-related operations such as retrieval, update, deletion, and metadata management.\n- Case-sensitive and must be unique to avoid overwriting or conflicts.\n- Supports folder-like delimiters (e.g., slashes) to simulate directory structures for logical organization.\n- Used as a key parameter in all relevant S3 API operations, including GetObject, PutObject, DeleteObject, and CopyObject.\n- Influences access permissions and lifecycle policies when used with prefixes or specific key patterns.\n\n**Implementation guidance**\n- Use a UTF-8 encoded string that accurately reflects the file’s intended path or name.\n- Avoid unsupported or problematic characters; strictly follow S3 key naming conventions to prevent errors.\n- Incorporate logical folder structures using delimiters for better organization and maintainability.\n- Ensure URL encoding when the key is included in web requests or URLs to handle special characters properly.\n- Validate that the key length does not exceed S3’s maximum allowed size (up to 1024 bytes).\n- Consider the impact of key naming on access control policies, lifecycle rules, and event triggers.\n\n**Examples**\n- \"documents/report2024.pdf\"\n- \"images/profile/user123.png\"\n- \"backups/2023/12/backup.zip\"\n- \"videos/2024/events/conference.mp4\"\n- \"logs/2024/06/15/server.log\"\n\n**Important notes**\n- The fileKey is case-sensitive; for example, \"File.txt\" and \"file.txt\" are distinct keys.\n- Changing the fileKey"},"backupBucket":{"type":"string","description":"The name of the Amazon S3 bucket designated for securely storing backup data. This bucket serves as the primary repository for saving copies of critical information, ensuring data durability, integrity, and availability for recovery in the event of data loss, corruption, or disaster. It must be a valid, existing bucket within the AWS environment, configured with appropriate permissions and security settings to facilitate reliable and efficient backup operations. Proper configuration of this bucket—including enabling versioning to preserve historical backup states, applying encryption to protect data at rest, and setting lifecycle policies to manage storage costs and data retention—is essential to optimize backup management, maintain compliance with organizational and regulatory requirements, and control expenses. The bucket should ideally reside in the same AWS region as the backup source to minimize latency and data transfer costs, and may incorporate cross-region replication for enhanced disaster recovery capabilities. 
Additionally, the bucket should be monitored regularly for access patterns, storage usage, and security compliance to ensure ongoing protection and operational efficiency.\n\n**Field behavior**\n- Specifies the exact S3 bucket where backup data will be written and stored.\n- Must reference a valid, existing bucket within the AWS account and region.\n- Used exclusively during backup processes to save data snapshots or incremental backups.\n- Requires appropriate write permissions to allow backup services to upload data.\n- Typically remains static unless backup storage strategies or requirements change.\n- Supports integration with backup scheduling and monitoring systems.\n- Facilitates data recovery by maintaining backup copies in a secure, durable location.\n\n**Implementation guidance**\n- Verify that the bucket name adheres to AWS S3 naming conventions and is DNS-compliant.\n- Confirm the bucket exists and is accessible by the backup service or user.\n- Set up IAM roles and bucket policies to grant necessary write and read permissions securely.\n- Enable bucket versioning to maintain historical backup versions and support data recovery.\n- Implement lifecycle policies to manage storage costs by archiving or deleting old backups.\n- Apply encryption (e.g., AWS KMS) to protect backup data at rest and meet compliance requirements.\n- Consider enabling cross-region replication for disaster recovery scenarios.\n- Regularly audit bucket permissions and access logs to ensure security compliance.\n- Monitor storage usage and costs to optimize resource allocation and prevent unexpected charges.\n- Align bucket configuration with organizational policies for data retention, security, and compliance.\n\n**Examples**\n- \"my-app-backups\"\n- \"company-data-backup-2024\"\n- \"prod-environment-backup-bucket\"\n- \"s3-backups-region1"},"serverSideEncryptionType":{"type":"string","description":"serverSideEncryptionType specifies the type of server-side encryption applied to objects stored in the S3 bucket, determining how data at rest is securely protected by the storage service. This property defines the encryption algorithm or method used by the server to automatically encrypt data upon storage, ensuring confidentiality, integrity, and compliance with security standards. It accepts predefined values that correspond to supported encryption mechanisms, influencing how data is encrypted, accessed, and managed during storage and retrieval operations. 
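A short hypothetical fragment showing the backup bucket together with server-side encryption, again under an assumed lowercase `s3` key; both values come from the example lists in this section, with `AES256` denoting S3-managed keys (SSE-S3) as described below.

```json
{
  "s3": {
    "backupBucket": "my-app-backups",
    "serverSideEncryptionType": "AES256"
  }
}
```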
By configuring this property, users can enforce encryption policies that safeguard sensitive information without requiring client-side encryption management, thereby simplifying security administration and enhancing data protection.\n\n**Field behavior**\n- Defines the encryption algorithm or method used by the server to encrypt data at rest.\n- Controls whether server-side encryption is enabled or disabled for stored objects.\n- Accepts specific predefined values corresponding to supported encryption types such as AES256 or aws:kms.\n- Influences data handling during storage and retrieval to maintain data confidentiality and integrity.\n- Applies encryption automatically without requiring client-side encryption management.\n- Determines compliance with organizational and regulatory encryption requirements.\n- Affects how encryption keys are managed, accessed, and audited.\n- Impacts access control and permissions related to encrypted objects.\n- May affect performance and cost depending on the encryption method chosen.\n\n**Implementation guidance**\n- Validate the input value against supported encryption types, primarily \"AES256\" and \"aws:kms\".\n- Ensure the selected encryption type is compatible with the S3 bucket’s configuration, policies, and permissions.\n- When using \"aws:kms\", specify the appropriate AWS KMS key ID if required, and verify key permissions.\n- Gracefully handle scenarios where encryption is not specified, disabled, or set to an empty value.\n- Provide clear and actionable error messages if an unsupported or invalid encryption type is supplied.\n- Consider the impact on performance and cost when enabling encryption, especially with KMS-managed keys.\n- Document encryption settings clearly for users to understand the security posture of stored data.\n- Implement fallback or default behaviors when encryption settings are omitted.\n- Ensure that encryption settings align with audit and compliance reporting requirements.\n- Coordinate with key management policies to maintain secure key lifecycle and rotation.\n- Test encryption settings thoroughly to confirm data is encrypted and accessible as expected.\n\n**Examples**\n- \"AES256\" — Server-side encryption using Amazon S3-managed encryption keys (SSE-S3).\n- \"aws:kms\" — Server-side encryption using AWS Key Management Service-managed keys (SSE-KMS"}}},"Wrapper":{"type":"object","description":"Configuration for Wrapper exports","properties":{"function":{"type":"string","description":"Specifies the exact name of the function to be invoked within the wrapper context, enabling dynamic and flexible execution of different functionalities based on the provided value. This property acts as a critical key identifier that directs the wrapper to call a specific function, facilitating modular, reusable, and maintainable code design. The function name must correspond to a valid, accessible, and callable function within the wrapper's scope, ensuring that the intended operation is performed accurately and efficiently. 
By allowing the function to be selected at runtime, this mechanism supports dynamic behavior, adaptable workflows, and context-sensitive execution paths, enhancing the overall flexibility and scalability of the system.\n\n**Field behavior**\n- Determines which specific function the wrapper will execute during its operation.\n- Accepts a string value representing the exact name of the target function.\n- Must match a valid and accessible function within the wrapper’s execution context.\n- Influences the wrapper’s behavior and output based on the selected function.\n- Supports dynamic assignment to enable flexible and context-dependent function calls.\n- Enables runtime selection of functionality, allowing for versatile and adaptive processing.\n\n**Implementation guidance**\n- Validate that the provided function name exists and is callable within the wrapper environment before invocation.\n- Ensure the function name is supplied as a string and adheres to the naming conventions used in the wrapper’s codebase.\n- Implement robust error handling to gracefully manage cases where the specified function does not exist, is inaccessible, or fails during execution.\n- Sanitize the input to prevent injection attacks, code injection, or other security vulnerabilities.\n- Allow dynamic updates to this property to support runtime changes in function execution without requiring system restarts.\n- Log function invocation attempts and outcomes for auditing and debugging purposes.\n- Consider implementing a whitelist or registry of allowed function names to enhance security and control.\n\n**Examples**\n- \"initialize\"\n- \"processData\"\n- \"renderOutput\"\n- \"cleanupResources\"\n- \"fetchUserDetails\"\n- \"validateInput\"\n- \"exportReport\"\n- \"authenticateUser\"\n\n**Important notes**\n- The function name is case-sensitive and must precisely match the function identifier in the underlying implementation.\n- Misspelled or incorrect function names will cause invocation errors or runtime failures.\n- This property specifies only the function name; any parameters or arguments for the function should be provided separately.\n- The wrapper context must be properly initialized and configured to recognize and execute the specified function.\n- Changes to this property can significantly alter the behavior of the wrapper, so modifications should be managed carefully"},"configuration":{"type":"object","description":"The `configuration` property defines a comprehensive and centralized set of parameters, settings, and operational directives that govern the behavior, functionality, and characteristics of the wrapper component. It acts as the primary container for all customizable options that influence how the wrapper initializes, runs, and adapts to various deployment environments or user requirements. This includes environment-specific configurations, feature toggles, resource limits, integration endpoints, security credentials, logging preferences, and other critical controls. 
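A minimal sketch of a wrapper block combining the `function` name with a small `configuration` object. The lowercase `wrapper` key and the exact nesting are assumptions based on this schema; the function name and the configuration keys are taken from the example lists in this section and carry no special meaning.

```json
{
  "wrapper": {
    "function": "processData",
    "configuration": {
      "timeout": 5000,
      "enableLogging": true,
      "maxRetries": 3
    }
  }
}
```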
By encapsulating these diverse settings, the configuration property enables fine-grained control over the wrapper’s runtime behavior, supports dynamic adaptability, and facilitates consistent management across different scenarios.\n\n**Field behavior**\n- Encapsulates all relevant configurable options affecting the wrapper’s operation and lifecycle.\n- Supports complex nested structures or key-value pairs to represent hierarchical and interrelated settings.\n- Can be mandatory or optional depending on the wrapper’s design and operational context.\n- Changes to configuration typically influence initialization, runtime behavior, and potentially require component restart or reinitialization.\n- May support dynamic updates if the wrapper is designed for live reconfiguration without downtime.\n- Acts as a single source of truth for operational parameters, ensuring consistency and predictability.\n\n**Implementation guidance**\n- Organize configuration parameters logically, grouping related settings to improve readability and maintainability.\n- Implement robust validation mechanisms to enforce correct data types, value ranges, and adherence to constraints.\n- Provide sensible default values for optional parameters to enhance usability and prevent misconfiguration.\n- Design the configuration schema to be extensible and backward-compatible, allowing seamless future enhancements.\n- Secure sensitive information within the configuration using encryption, secure storage, or exclusion from logs and error messages.\n- Thoroughly document each configuration option, including its purpose, valid values, default behavior, and impact on the wrapper.\n- Consider environment-specific overrides or profiles to facilitate deployment flexibility.\n- Ensure configuration changes are atomic and consistent to avoid partial or invalid states.\n\n**Examples**\n- `{ \"timeout\": 5000, \"enableLogging\": true, \"maxRetries\": 3 }`\n- `{ \"environment\": \"production\", \"apiEndpoint\": \"https://api.example.com\", \"features\": { \"beta\": false } }`\n- Nested objects specifying database connection details, authentication credentials, UI themes, third-party service integrations, or resource limits.\n- Feature flags enabling or disabling experimental capabilities dynamically.\n- Security parameters such as OAuth tokens, encryption keys, or access control lists.\n\n**Important notes**\n- Comprehensive and clear documentation of all"},"lookups":{"type":"object","properties":{"type":{"type":"string","description":"type: >\n  Specifies the category or classification of the lookup within the wrapper context.\n  This property defines the nature or kind of lookup being performed or referenced.\n  **Field behavior**\n  - Determines the specific type of lookup operation or data classification.\n  - Influences how the lookup data is processed or interpreted.\n  - May restrict or enable certain values or options based on the type selected.\n  **Implementation guidance**\n  - Use clear and consistent naming conventions for different types.\n  - Validate the value against a predefined set of allowed types to ensure data integrity.\n  - Ensure that the type aligns with the corresponding lookup logic or data source.\n  **Examples**\n  - \"userRole\"\n  - \"productCategory\"\n  - \"statusCode\"\n  - \"regionCode\"\n  **Important notes**\n  - The type value is critical for correctly resolving and handling lookup data.\n  - Changing the type may affect downstream processing or data retrieval.\n  - Ensure compatibility with other related 
fields or components that depend on this type.\n  **Dependency chain**\n  - Depends on the wrapper context to provide scope.\n  - Influences the selection and retrieval of lookup values.\n  - May be linked to validation schemas or business logic modules.\n  **Technical details**\n  - Typically represented as a string.\n  - Should conform to a controlled vocabulary or enumeration where applicable.\n  - Case sensitivity may apply depending on implementation."},"_lookupCacheId":{"type":"string","format":"objectId","description":"_lookupCacheId: Identifier used to reference a specific cache entry for lookup operations within the wrapper's lookup mechanism.\n**Field behavior**\n- Serves as a unique identifier for cached lookup data.\n- Used to retrieve or update cached lookup results efficiently.\n- Helps in minimizing redundant lookup operations by reusing cached data.\n- Typically remains constant for the lifespan of the cached data entry.\n**Implementation guidance**\n- Should be generated to ensure uniqueness within the scope of the wrapper's lookups.\n- Must be consistent and stable to correctly reference the intended cache entry.\n- Should be invalidated or updated when the underlying lookup data changes.\n- Can be a string or numeric identifier depending on the caching system design.\n**Examples**\n- \"userRolesCache_v1\"\n- \"productLookupCache_2024_06\"\n- \"regionCodesCache_42\"\n**Important notes**\n- Proper management of _lookupCacheId is critical to avoid stale or incorrect lookup data.\n- Changing the _lookupCacheId without updating the cache content can lead to lookup failures.\n- Should be used in conjunction with cache expiration or invalidation strategies.\n**Dependency chain**\n- Depends on the wrapper.lookups system to manage cached lookup data.\n- May interact with cache storage or memory management components.\n- Influences lookup operation performance and data consistency.\n**Technical details**\n- Typically implemented as a string identifier.\n- May be used as a key in key-value cache stores.\n- Should be designed to avoid collisions with other cache identifiers.\n- May include versioning or timestamp information to manage cache lifecycle."},"extract":{"type":"string","description":"Extract: Specifies the extraction criteria or pattern used to retrieve specific data from a given input within the lookup process.\n**Field behavior**\n- Defines the rules or patterns to identify and extract relevant information from input data.\n- Can be a string, regular expression, or structured query depending on the implementation.\n- Used during the lookup operation to isolate the desired subset of data.\n- May support multiple extraction patterns or conditional extraction logic.\n**Implementation guidance**\n- Ensure the extraction pattern is precise to avoid incorrect or partial data retrieval.\n- Validate the extraction criteria against sample input data to confirm accuracy.\n- Support common pattern formats such as regex for flexible and powerful extraction.\n- Provide clear error handling if the extraction pattern fails or returns no results.\n- Allow for optional extraction parameters to customize behavior (e.g., case sensitivity).\n**Examples**\n- A regex pattern like `\"\\d{4}-\\d{2}-\\d{2}\"` to extract dates in YYYY-MM-DD format.\n- A JSONPath expression such as `$.store.book[*].author` to extract authors from a JSON object.\n- A simple substring like `\"ERROR:\"` to extract error messages from logs.\n- XPath query to extract nodes from XML data.\n**Important notes**\n- The 
extraction logic directly impacts the accuracy and relevance of the lookup results.\n- Complex extraction patterns may affect performance; optimize where possible.\n- Extraction should handle edge cases such as missing fields or unexpected data formats gracefully.\n- Document supported extraction pattern types and syntax clearly for users.\n**Dependency chain**\n- Depends on the input data format and structure to define appropriate extraction criteria.\n- Works in conjunction with the lookup mechanism that applies the extraction.\n- May interact with validation or transformation steps post-extraction.\n**Technical details**\n- Typically implemented using pattern matching libraries or query language parsers.\n- May require escaping or encoding special characters within extraction patterns.\n- Should support configurable options like multiline matching or case sensitivity.\n- Extraction results are usually returned as strings, arrays, or structured objects depending on context."},"map":{"type":"object","description":"map: >\n  A mapping object that defines key-value pairs used for lookup operations within the wrapper context.\n  **Field behavior**\n  - Serves as a dictionary or associative array for translating or mapping input keys to corresponding output values.\n  - Used to customize or override default lookup values dynamically.\n  - Supports string keys and values, but may also support other data types depending on implementation.\n  **Implementation guidance**\n  - Ensure keys are unique within the map to avoid conflicts.\n  - Validate that values conform to expected formats or types required by the lookup logic.\n  - Consider immutability or controlled updates to prevent unintended side effects during runtime.\n  - Provide clear error handling for missing or invalid keys during lookup operations.\n  **Examples**\n  - {\"US\": \"United States\", \"CA\": \"Canada\", \"MX\": \"Mexico\"}\n  - {\"error404\": \"Not Found\", \"error500\": \"Internal Server Error\"}\n  - {\"env\": \"production\", \"version\": \"1.2.3\"}\n  **Important notes**\n  - The map is context-specific and should be populated according to the needs of the wrapper's lookup functionality.\n  - Large maps may impact performance; optimize size and access patterns accordingly.\n  - Changes to the map may require reinitialization or refresh of dependent components.\n  **Dependency chain**\n  - Depends on the wrapper.lookups context for proper integration.\n  - May be referenced by other properties or methods performing lookup operations.\n  **Technical details**\n  - Typically implemented as a JSON object or dictionary data structure.\n  - Keys and values are usually strings but can be extended to other serializable types.\n  - Should support efficient retrieval, ideally O(1) time complexity for lookups."},"default":{"type":["object","null"],"description":"The default property specifies the fallback or initial value to be used within the wrapper's lookups configuration when no other specific value is provided or applicable. 
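A hypothetical single lookup entry of the static-map kind described by the sub-fields above: the `map` pairs are the country-code example from this section, and `default` is set to null, matching its declared object-or-null type. Whether an import carries one such object or a list of them is not settled by this excerpt, so the outer shape should be confirmed against the live API.

```json
{
  "map": { "US": "United States", "CA": "Canada", "MX": "Mexico" },
  "default": null
}
```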
This ensures that the system has a predefined baseline value to operate with, preventing errors or undefined behavior.\n\n**Field behavior**\n- Acts as the fallback value in the lookup process.\n- Used when no other matching lookup value is found.\n- Provides a baseline or initial value for the wrapper configuration.\n- Ensures consistent behavior by avoiding null or undefined states.\n\n**Implementation guidance**\n- Define a sensible and valid default value that aligns with the expected data type and usage context.\n- Ensure the default value is compatible with other dependent fields or processes.\n- Update the default value cautiously, as it impacts the fallback behavior system-wide.\n- Validate the default value during configuration loading to prevent runtime errors.\n\n**Examples**\n- A string value such as \"N/A\" or \"unknown\" for textual lookups.\n- A numeric value like 0 or -1 for numerical lookups.\n- A boolean value such as false when a true/false default is needed.\n- An object or dictionary representing a default configuration set.\n\n**Important notes**\n- The default value should be meaningful and appropriate to avoid misleading results.\n- Overriding the default value may affect downstream logic relying on fallback behavior.\n- Absence of a default value might lead to errors or unexpected behavior in the lookup process.\n- Consider localization or context-specific defaults if applicable.\n\n**Dependency chain**\n- Used by the wrapper.lookups mechanism during value resolution.\n- May influence or be influenced by other lookup-related properties or configurations.\n- Interacts with error handling or fallback logic in the system.\n\n**Technical details**\n- Data type should match the expected type of lookup values (string, number, boolean, object, etc.).\n- Stored and accessed as part of the wrapper's lookup configuration object.\n- Should be immutable during runtime unless explicitly reconfigured.\n- May be serialized/deserialized as part of configuration files or API payloads."},"allowFailures":{"type":"boolean","description":"allowFailures: Indicates whether the system should permit failures during the lookup process without aborting the entire operation.\n**Field behavior**\n- When set to true, individual lookup failures are tolerated, allowing the process to continue.\n- When set to false, any failure in the lookup process causes the entire operation to fail.\n- Helps in scenarios where partial results are acceptable or expected.\n**Implementation guidance**\n- Use this flag to control error handling behavior in lookup operations.\n- Ensure that downstream processes can handle partial or incomplete data if allowFailures is true.\n- Log or track failures when allowFailures is enabled to aid in debugging and monitoring.\n**Examples**\n- allowFailures: true — The system continues processing even if some lookups fail.\n- allowFailures: false — The system stops and reports an error immediately upon a lookup failure.\n**Important notes**\n- Enabling allowFailures may result in incomplete or partial data sets.\n- Use with caution in critical systems where data integrity is paramount.\n- Consider combining with retry mechanisms or fallback strategies.\n**Dependency chain**\n- Depends on the lookup operation within wrapper.lookups.\n- May affect error handling and result aggregation components.\n**Technical details**\n- Boolean value: true or false.\n- Default behavior should be defined explicitly to avoid ambiguity.\n- Should be checked before executing lookup calls to determine 
error handling flow."},"_id":{"type":"object","description":"Unique identifier for the lookup entry within the wrapper context.\n**Field behavior**\n- Serves as the primary key to uniquely identify each lookup entry.\n- Immutable once assigned to ensure consistent referencing.\n- Used to retrieve, update, or delete specific lookup entries.\n**Implementation guidance**\n- Should be generated using a globally unique identifier (e.g., UUID or ObjectId).\n- Must be indexed in the database for efficient querying.\n- Should be validated to ensure uniqueness within the wrapper.lookups collection.\n**Examples**\n- \"507f1f77bcf86cd799439011\"\n- \"a1b2c3d4e5f6789012345678\"\n**Important notes**\n- This field is mandatory for every lookup entry.\n- Changing the _id after creation can lead to data inconsistency.\n- Should not contain sensitive or personally identifiable information.\n**Dependency chain**\n- Used by other fields or services that reference lookup entries.\n- May be linked to foreign keys or references in related collections or tables.\n**Technical details**\n- Typically stored as a string or ObjectId type depending on the database.\n- Must conform to the format and length constraints of the chosen identifier scheme.\n- Should be generated server-side to prevent collisions."},"cLocked":{"type":"object","description":"cLocked indicates whether the item is currently locked, preventing modifications or deletions.\n**Field behavior**\n- Represents the lock status of an item within the system.\n- When set to true, the item is considered locked and cannot be edited or deleted.\n- When set to false, the item is unlocked and available for modifications.\n- Typically used to enforce data integrity and prevent concurrent conflicting changes.\n**Implementation guidance**\n- Should be a boolean value: true or false.\n- Ensure that any operation attempting to modify or delete the item checks the cLocked status first.\n- Update the cLocked status appropriately when locking or unlocking the item.\n- Consider integrating with user permissions to control who can change the lock status.\n**Examples**\n- cLocked: true (The item is locked and cannot be changed.)\n- cLocked: false (The item is unlocked and editable.)\n**Important notes**\n- Locking an item does not necessarily mean it is read-only; it depends on the system's enforcement.\n- The lock status should be clearly communicated to users to avoid confusion.\n- Changes to cLocked should be logged for audit purposes.\n**Dependency chain**\n- May depend on user roles or permissions to set or unset the lock.\n- Other fields or operations may check cLocked before proceeding.\n**Technical details**\n- Data type: Boolean.\n- Default value is typically false unless specified otherwise.\n- Stored as part of the lookup or wrapper object in the data model.\n- Should be efficiently queryable to enforce lock checks during operations."}},"description":"A collection of key-value pairs that serve as a centralized reference dictionary to map or translate specific identifiers, codes, or keys into meaningful, human-readable, or contextually relevant values within the wrapper object. This property facilitates data normalization, standardization, and clarity across various components of the application by providing descriptive labels, metadata, or explanatory information associated with each key. 
It enhances data interpretation, validation, and presentation by enabling consistent and seamless conversion of coded data into understandable formats, supporting both simple and complex hierarchical mappings as needed.\n\n**Field behavior**\n- Functions as a lookup dictionary to resolve codes or keys into descriptive, user-friendly values.\n- Supports consistent data representation and interpretation throughout the application.\n- Commonly accessed during data processing, validation, or UI rendering to enhance readability.\n- May support hierarchical or nested mappings for complex relationships.\n- Typically remains static or changes infrequently but should reflect the latest valid mappings.\n\n**Implementation guidance**\n- Structure as an object, map, or dictionary with unique, well-defined keys and corresponding descriptive values.\n- Ensure keys are consistent, unambiguous, and conform to expected formats or standards.\n- Values should be concise, clear, and informative to facilitate easy understanding by end-users or systems.\n- Support nested or multi-level lookups if the domain requires hierarchical mapping.\n- Validate lookup data integrity regularly to prevent missing, outdated, or incorrect mappings.\n- Consider immutability or concurrency controls if the lookup data is accessed or modified concurrently.\n- Optimize for efficient retrieval, aiming for constant-time (O(1)) lookup performance.\n\n**Examples**\n- Mapping ISO country codes to full country names, e.g., {\"US\": \"United States\", \"FR\": \"France\"}.\n- Translating HTTP status codes to descriptive messages, e.g., {\"200\": \"OK\", \"404\": \"Not Found\"}.\n- Converting product or SKU identifiers to product names within an inventory management system.\n- Mapping error codes to user-friendly error descriptions.\n- Associating role IDs with role names in an access control system.\n\n**Important notes**\n- Keep the lookup collection current to accurately reflect any updates in codes or their meanings.\n- Avoid including sensitive, confidential, or personally identifiable information within lookup values.\n- Monitor and manage the size of the lookup collection to prevent performance degradation.\n- Ensure thread-safety or appropriate synchronization mechanisms if the lookups are mutable and accessed concurrently.\n- Changes to lookup data can have widespread effects on data interpretation and user interface behavior."}}},"Salesforce":{"type":"object","description":"Configuration for Salesforce imports containing operation type, API selection, and object-specific settings.\n\n**IMPORTANT:** This schema defines ONLY the Salesforce-specific configuration properties. Properties like `ignoreExisting`, `ignoreMissing`, `name`, `description` are NOT part of this schema - they belong at a different level.\n\nWhen asked to generate this configuration, return an object with these properties:\n- `operation` (REQUIRED): The Salesforce operation type (insert, update, upsert, etc.)\n- `api` (REQUIRED): The Salesforce API to use (default: \"soap\")\n- `sObjectType`: The Salesforce object type (e.g., \"Account\", \"Contact\", \"Vendor__c\")\n- `idLookup`: Configuration for looking up existing records\n- `upsert`: Configuration for upsert operations (when operation is \"upsert\")\n- And other operation-specific properties as needed","properties":{"lookups":{"type":["string","null"],"description":"A collection of lookup tables or reference data sets that provide standardized, predefined values or options within the application context. 
These lookup tables enable consistent data entry, validation, and interpretation by defining controlled vocabularies or permissible values for various fields or parameters across the system. They serve as authoritative sources for reference data, ensuring uniformity and reducing errors in data handling and user interactions. Lookup tables typically encompass static or infrequently changing data that supports consistent business logic and user interface behavior across multiple modules or components. By centralizing these reference datasets, the system promotes reusability, simplifies maintenance, and enhances data integrity throughout the application lifecycle.\n\n**Field behavior**\n- Contains multiple named lookup tables, each representing a distinct category or domain of reference data.\n- Used to populate user interface elements such as dropdown menus, autocomplete fields, and selection lists.\n- Supports validation logic by restricting input to predefined permissible values.\n- Typically consists of static or infrequently changing data that underpins consistent data interpretation.\n- May be referenced by multiple components or modules within the application to maintain data consistency.\n- Enables dynamic adaptation of UI and business rules by centralizing reference data management.\n- Facilitates localization by supporting language-specific labels or descriptions where applicable.\n- Often versioned or managed to track changes and ensure backward compatibility.\n\n**Implementation guidance**\n- Structure as a dictionary or map where keys correspond to lookup table names and values are arrays or lists of unique, well-defined entries.\n- Ensure each lookup entry is unique within its table and includes necessary metadata such as codes, descriptions, or display labels.\n- Design for easy updates or extensions to lookup tables without compromising existing data integrity or system stability.\n- Implement caching strategies to optimize performance, especially for client-side consumption.\n- Consider supporting localization and internationalization by including language-specific labels or descriptions where applicable.\n- Incorporate versioning or change management mechanisms to handle updates without disrupting dependent systems.\n- Validate data integrity rigorously during ingestion or updates to prevent duplicates, inconsistencies, or invalid entries.\n- Separate lookup data from business logic to maintain clarity and ease of maintenance.\n- Provide clear documentation for each lookup table to facilitate understanding and correct usage by developers and users.\n- Ensure lookup data is accessible through standardized APIs or interfaces to promote interoperability.\n\n**Examples**\n- A \"countries\" lookup containing standardized country codes (e.g., ISO 3166-1 alpha-2) and their corresponding country names.\n- A \"statusCodes\" lookup defining permissible status values such as \""},"operation":{"type":"string","enum":["insert","update","upsert","upsertpicklistvalues","delete","addupdate"],"description":"Specifies the type of Salesforce operation to perform on records. 
This determines how data is created, modified, or removed in Salesforce.\n\n**Field behavior**\n- Defines the specific Salesforce action to execute (insert, update, upsert, etc.)\n- Dictates required fields and validation rules for the operation\n- Determines API endpoint and request structure\n- Automatically converted to lowercase before processing\n- Influences response format and error handling\n\n**Operation types**\n\n****insert****\n- **Purpose:** Create new records in Salesforce\n- **Behavior:** Creates new records\n- **Use when:** Creating brand new records\n- **With ignoreExisting: true:** Checks for existing records and skips them (requires idLookup.whereClause)\n- **Without ignoreExisting:** Creates all records without checking for duplicates\n- **Example:** \"Insert new leads from marketing campaign\"\n- **Example with ignoreExisting:** \"Create vendors while ignoring existing vendors\" → operation: \"insert\" + ignoreExisting: true\n\n****update****\n- **Purpose:** Modify existing records in Salesforce\n- **Behavior:** Requires record ID; fails if record doesn't exist\n- **Use when:** Updating known existing records\n- **Example:** \"Update customer status based on order completion\"\n\n****upsert****\n- **Purpose:** Create or update records based on external ID\n- **Behavior:** Updates if external ID matches, creates if not found\n- **Use when:** Syncing data from external systems with external ID field\n- **Example:** \"Upsert customers by External_ID__c field\"\n- **REQUIRES:**\n  - `idLookup.extract` field (maps incoming field to External ID)\n  - `upsert.externalIdField` field (specifies Salesforce External ID field)\n  - Both fields MUST be set for upsert to work\n\n****upsertpicklistvalues****\n- **Purpose:** Create or update picklist values dynamically\n- **Behavior:** Manages picklist value creation/updates\n- **Use when:** Maintaining picklist values from external data\n- **Example:** \"Sync product categories to Salesforce picklist\"\n\n****delete****\n- **Purpose:** Remove records from Salesforce\n- **Behavior:** Requires record ID; records moved to recycle bin\n- **Use when:** Removing obsolete or invalid records\n- **Example:** \"Delete expired opportunities\"\n\n****addupdate****\n- **Purpose:** Create new or update existing records (Celigo-specific)\n- **Behavior:** Similar to upsert but uses different matching logic\n- **Use when:** Upserting without requiring external ID field\n- **Example:** \"Sync contacts, create new or update existing by email\"\n\n**Implementation guidance**\n- Choose operation based on whether records are new, existing, or unknown\n- For **insert**: Ensure records are truly new to avoid errors\n- For **insert with ignoreExisting: true**: MUST set `idLookup.whereClause` to check for existing records\n- For **update/delete**: MUST set `idLookup.whereClause` to identify records\n- For **upsert**: MUST also include the `upsert` object with `externalIdField` property\n- For **upsert**: MUST also set `idLookup.extract` to map incoming field\n- For **addupdate**: MUST set `idLookup.whereClause` to define matching criteria\n- Use **upsertpicklistvalues** only for picklist management scenarios\n\n**Common patterns**\n- **\"Create X while ignoring existing X\"** → operation: \"insert\" + api: \"soap\" + idLookup.whereClause\n- **\"Create or update X\"** → operation: \"upsert\" + api: \"soap\" + upsert.externalIdField + idLookup.extract\n- **\"Update X\"** → operation: \"update\" + api: \"soap\" + idLookup.whereClause\n- **\"Sync X\"** → 
operation: \"addupdate\" + api: \"soap\" + idLookup.whereClause\n- **ALWAYS include** api field (default to \"soap\" if not specified in prompt)\n\n**Examples**\n\n**Insert new records**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Lead\"\n}\n```\nPrompt: \"Create new leads in Salesforce\"\n\n**Insert with ignore existing**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Vendor__c\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{VendorName}}'\"\n  }\n}\n```\nPrompt: \"Create vendor records while ignoring existing vendors\"\n\n**NOTE:** For \"ignore existing\" prompts, also remember that `ignoreExisting: true` belongs at a different level (not part of this salesforce configuration).\n\n**Update existing records**\n```json\n{\n  \"operation\": \"update\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Account\",\n  \"idLookup\": {\n    \"whereClause\": \"Id = '{{AccountId}}'\"\n  }\n}\n```\nPrompt: \"Update existing accounts in Salesforce\"\n\n**Upsert by external id**\n```json\n{\n  \"operation\": \"upsert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Contact\",\n  \"idLookup\": {\n    \"extract\": \"ExternalId\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\nPrompt: \"Create or update contacts by External_ID__c\"\n\n**Addupdate (create or update)**\n```json\n{\n  \"operation\": \"addupdate\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Customer__c\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{CustomerName}}'\"\n  }\n}\n```\nPrompt: \"Sync customers, create new or update existing\"\n\n**Important notes**\n- Operation value is case-insensitive (automatically converted to lowercase)\n- Invalid operation values will cause API validation errors\n- Each operation has different requirements for record identification\n- **CRITICAL for upsert**: MUST include the `upsert` object with `externalIdField` property\n- **CRITICAL for upsert**: MUST set `idLookup.extract` to map incoming field to External ID\n- **upsert** requires external ID field to be configured in Salesforce\n- **addupdate** is Celigo-specific and may not work with all Salesforce APIs\n- Operations must align with user's Salesforce permissions\n- Some operations support bulk processing more efficiently than others"},"api":{"type":"string","enum":["soap","rest","metadata","compositerecord"],"description":"**REQUIRED FIELD:** Specifies which Salesforce API to use for the import operation. Different APIs have different capabilities, performance characteristics, and use cases.\n\n**CRITICAL:** This field MUST ALWAYS be set. 
If no specific API is mentioned in the prompt, **DEFAULT to \"soap\"**.\n\n**Field behavior**\n- **REQUIRED** for all Salesforce imports\n- Determines the protocol and endpoint used for Salesforce communication\n- Influences request format, batch sizes, and response handling\n- Automatically converted to lowercase before processing\n- Affects available features and operation types\n- Impacts performance and error handling behavior\n\n**Api types**\n\n****soap** (Default - Most Common)**\n- **Purpose:** SOAP API with additional features like AllOrNone transaction control\n- **Best for:** Transactional imports requiring rollback capabilities\n- **Features:** AllOrNone header, enhanced transaction control, XML-based\n- **Limits:** Similar to REST but with more overhead\n- **Use when:** Need transaction rollback, legacy integrations\n- **Example:** \"Import invoices with all-or-none transaction guarantee\"\n\n****rest****\n- **Purpose:** Standard REST API for single-record and small-batch operations\n- **Best for:** Real-time imports, small datasets (< 200 records/call)\n- **Features:** Full CRUD operations, flexible querying, rich error messages\n- **Limits:** 2,000 records per API call\n- **Use when:** Standard import operations, real-time data sync\n- **Example:** \"Import leads from web form submissions\"\n\n****metadata****\n- **Purpose:** Metadata API for importing configuration and setup data\n- **Best for:** Deploying Salesforce configuration, not data records\n- **Features:** Deploy custom objects, fields, workflows, etc.\n- **Use when:** Deploying Salesforce configuration between orgs\n- **Example:** \"Deploy custom object definitions from non-production to production\"\n- **Note:** NOT for data records - only for metadata\n\n****compositerecord****\n- **Purpose:** Composite Graph API for related record trees\n- **Best for:** Creating multiple related records in one request\n- **Features:** Create parent-child relationships atomically\n- **Use when:** Importing hierarchical data (e.g., Account + Contacts + Opportunities)\n- **Example:** \"Import accounts with their related contacts in one operation\"\n\n**Implementation guidance**\n- **MUST ALWAYS SET THIS FIELD** - Never leave it undefined\n- **Default to \"soap\"** for most import operations; use it whenever the prompt doesn't specify an API type\n- Use **\"rest\"** only when the prompt explicitly calls for the REST API or real-time data sync\n- Use **\"soap\"** with the AllOrNone header when transaction rollback is required\n- Use **\"metadata\"** only for configuration deployment, never for data\n- Use **\"compositerecord\"** for complex parent-child relationship imports\n- Consider API limits and performance characteristics for your data volume\n- Ensure the Salesforce connection supports the chosen API type\n\n**Default choice**\nWhen in doubt or when no API is mentioned in the prompt → **USE \"soap\"**\n\n**Examples**\n\n**Rest api**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"rest\",\n  \"sObjectType\": \"Lead\"\n}\n```\nPrompt: \"Import leads into Salesforce\"\n\n**Soap api with transaction control**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Invoice__c\",\n  \"soap\": {\n    \"headers\": {\n      \"allOrNone\": true\n    }\n  }\n}\n```\nPrompt: \"Import invoices with all-or-none guarantee\"\n\n**Composite Record api**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"compositerecord\",\n  \"sObjectType\": \"Account\"\n}\n```\nPrompt: \"Import accounts with related contacts and opportunities\"\n\n**Important notes**\n- API value is 
case-insensitive (automatically converted to lowercase)\n- **SOAP API is the default and most commonly used**; it has slightly more overhead than REST but provides transaction guarantees\n- Metadata API is for configuration only, not data records\n- Composite Record API requires careful relationship mapping\n- Each API has different rate limits and performance characteristics\n- Ensure your Salesforce connection and permissions support the chosen API"},"soap":{"type":"object","properties":{"headers":{"type":"object","properties":{"allOrNone":{"type":"boolean","description":"allOrNone specifies whether the Salesforce API operation should be executed atomically, ensuring that either all parts of the operation succeed or none do. When set to true, if any record in a batch operation fails, the entire transaction is rolled back and no changes are committed, preserving data integrity across the dataset. When set to false, the operation allows partial success, meaning some records may be processed and committed even if others fail, which can improve performance but requires careful error handling to manage partial failures. This property is primarily applicable to data manipulation operations such as create, update, delete, and upsert, and is critical for maintaining consistent and reliable data states in multi-record transactions.\n\n**Field behavior**\n- When true, enforces atomicity: all records in the batch must succeed or the entire operation is rolled back with no changes applied.\n- When false, permits partial success: successfully processed records are committed even if some records fail.\n- Applies mainly to batch DML operations including create, update, delete, and upsert.\n- Helps maintain data consistency by preventing partial updates when strict transactional integrity is required.\n- Influences how errors are reported and handled in the API response.\n- Affects transaction commit behavior and rollback mechanisms within the Salesforce platform.\n\n**Implementation guidance**\n- Set to true to guarantee strict data consistency and avoid partial data changes that could lead to inconsistent states.\n- Set to false to allow higher throughput and partial processing when some failures are acceptable or expected.\n- Default value is false if the property is omitted, allowing partial success by default.\n- When false, implement robust error handling to process partial success responses and handle individual record errors appropriately.\n- Verify that the target Salesforce API operation supports the allOrNone header before use.\n- Consider the impact on transaction duration and system resources when enabling atomic operations.\n- Use in scenarios where data integrity is paramount, such as financial or compliance-related data updates.\n\n**Examples**\n- allOrNone = true: Attempting to update 10 records where one record fails validation results in no records being updated, preserving atomicity.\n- allOrNone = false: Attempting to update 10 records where one record fails validation results in 9 records updated successfully and one failure reported.\n- allOrNone = true: Deleting multiple records in a batch will either delete all specified records or none if any deletion fails.\n- allOrNone = false: Creating multiple records where some violate unique constraints results in successful creation of valid records and"}},"description":"Headers to be included in the SOAP request sent to Salesforce, primarily used for authentication, session management, and transmitting additional metadata required by the 
Salesforce API. These headers form part of the SOAP envelope's header section and are essential for establishing and maintaining a valid session, specifying client options, and enabling various Salesforce API features. They can include standard headers such as SessionHeader for session identification, CallOptions for client-specific settings, and LoginScopeHeader for scoping login requests, as well as custom headers tailored to specific integration needs. Proper configuration of these headers ensures secure and successful communication with Salesforce services, enabling precise control over API interactions and session lifecycle.\n\n**Field behavior**\n- Defines the set of SOAP headers included in every request to the Salesforce API.\n- Facilitates passing of authentication credentials like session IDs or OAuth tokens.\n- Supports inclusion of metadata that controls API behavior or scopes requests.\n- Allows addition of custom headers required by specialized Salesforce integrations.\n- Headers persist across multiple API calls within the same session when set.\n- Modifies request context by influencing how Salesforce processes and authorizes API calls.\n- Enables fine-grained control over API call execution, such as setting query options or client identifiers.\n\n**Implementation guidance**\n- Construct headers as key-value pairs conforming to Salesforce’s SOAP header XML schema.\n- Include mandatory authentication headers such as SessionHeader with valid session IDs.\n- Validate all header values to prevent injection attacks or malformed requests.\n- Use headers to specify client information, API versioning, or organizational context as needed.\n- Ensure header names and structures strictly follow Salesforce SOAP API documentation.\n- Secure sensitive header data to prevent unauthorized access or leakage.\n- Update headers appropriately when session tokens expire or when switching contexts.\n- Test header configurations thoroughly to confirm compatibility with Salesforce API versions.\n- Consider using namespaces and proper XML serialization to maintain SOAP compliance.\n- Handle errors gracefully when headers are missing or invalid to improve debugging.\n\n**Examples**\n- `{ \"SessionHeader\": { \"sessionId\": \"00Dxx0000001gEREAY\" } }` — authenticates the session.\n- `{ \"CallOptions\": { \"client\": \"MyAppClient\" } }` — specifies client application details.\n- `{ \"LoginScopeHeader\": { \"organizationId\": \"00Dxx0000001gEREAY\" } }` — scopes login to a specific organization.\n- `{ \"CustomHeader\": { \"customField\": \"customValue\" } }` — example of a"},"batchSize":{"type":"number","description":"The number of records to be processed in each batch during Salesforce SOAP API operations. This property controls the size of data chunks sent or retrieved in a single API call, optimizing performance and resource utilization by balancing throughput and system resource consumption. Adjusting the batch size directly impacts the number of API calls made, the processing speed, memory usage, and network load, enabling fine-tuning of data operations to suit specific environment constraints and performance goals. 
Proper configuration of batchSize helps prevent API throttling, reduces the risk of timeouts, and ensures efficient handling of large datasets by segmenting data into manageable portions.\n\n**Field behavior**\n- Determines the count of records included in each batch request to the Salesforce SOAP API.\n- Directly influences the total number of API calls required when handling large datasets.\n- Affects processing throughput, latency, and memory usage during data operations.\n- Can be dynamically adjusted to optimize performance based on system capabilities and network conditions.\n- Impacts error handling and retry mechanisms by controlling batch granularity.\n- Influences network load and the likelihood of encountering API limits or timeouts.\n\n**Implementation guidance**\n- Choose a batch size that adheres to Salesforce API limits and recommended best practices.\n- Balance between larger batch sizes (which reduce API calls but increase memory and processing load) and smaller batch sizes (which increase API calls but reduce resource consumption).\n- Ensure the batchSize value is a positive integer within the allowed range for the specific Salesforce operation.\n- Continuously monitor API response times, error rates, and resource utilization to fine-tune the batch size for optimal performance.\n- Default to Salesforce-recommended batch sizes when uncertain or when starting configuration.\n- Consider network stability and latency when selecting batch size to minimize timeouts and failures.\n- Validate batch size compatibility with other integration components and middleware.\n- Test batch size settings in a staging environment before deploying to production to avoid unexpected failures.\n\n**Examples**\n- `batchSize: 200` — Processes 200 records per batch, a common default for many Salesforce operations.\n- `batchSize: 500` — Processes 500 records per batch, often the maximum allowed for certain Salesforce SOAP API calls.\n- `batchSize: 50` — Processes smaller batches suitable for environments with limited memory or network bandwidth.\n- `batchSize: 100` — A moderate batch size balancing throughput and resource usage in typical scenarios.\n\n**Important notes**\n- Salesforce SOAP API enforces maximum batch size limits (commonly 200"}},"description":"Specifies the comprehensive configuration settings required for integrating with the Salesforce SOAP API. This property includes all essential parameters to establish, authenticate, and manage SOAP-based communication with Salesforce services, enabling a wide range of operations such as creating, updating, deleting, querying, and executing actions on Salesforce objects. It encompasses detailed configurations for endpoint URLs, authentication credentials (such as session IDs or OAuth tokens), SOAP headers, session management details, timeout settings, and error handling mechanisms to ensure reliable, secure, and efficient interactions. 
The configuration must strictly adhere to Salesforce's WSDL definitions and security standards, supporting robust SOAP API usage within the application environment.\n\n**Field behavior**\n- Defines all connection, authentication, and operational parameters necessary for Salesforce SOAP API interactions.\n- Facilitates the construction and transmission of properly structured SOAP requests and the parsing of SOAP responses.\n- Manages the session lifecycle, including token acquisition, refresh, and timeout handling to maintain persistent connectivity.\n- Supports comprehensive error detection and handling of SOAP faults, API exceptions, and network issues to maintain integration stability.\n- Enables customization of SOAP headers, namespaces, and envelope structures as required by specific Salesforce API calls.\n- Allows configuration of retry policies and backoff strategies in response to transient errors or rate limiting.\n- Supports environment-specific configurations, such as production, non-production, or custom Salesforce domains.\n\n**Implementation guidance**\n- Configure endpoint URLs and authentication credentials accurately based on the Salesforce environment (production, non-production, or custom domains).\n- Validate SOAP envelope structure, namespaces, and compliance with the latest Salesforce WSDL specifications to prevent communication errors.\n- Implement robust error handling to capture, log, and respond appropriately to SOAP faults, API limit exceptions, and network failures.\n- Securely store and transmit sensitive information such as session IDs, OAuth tokens, passwords, and certificates, following best security practices.\n- Monitor Salesforce API usage limits and implement throttling or queuing mechanisms to avoid exceeding quotas and service disruptions.\n- Set timeout values thoughtfully to balance between preventing hanging requests and allowing sufficient time for valid operations to complete.\n- Regularly update the SOAP configuration to align with Salesforce API version changes, WSDL updates, and evolving security requirements.\n- Incorporate logging and monitoring to track SOAP request and response cycles for troubleshooting and performance optimization.\n- Ensure compatibility with Salesforce security protocols, including TLS versions and encryption standards.\n\n**Examples**\n- Specifying the Salesforce SOAP API endpoint URL along with a valid session ID or OAuth token for authentication.\n- Defining custom SOAP headers"},"sObjectType":{"type":"string","description":"sObjectType specifies the exact Salesforce object type that the API operation will target, such as standard objects like Account, Contact, Opportunity, or custom objects like CustomObject__c. This property is essential for defining the schema context of the API request, determining which fields are relevant, and directing the operation to the correct Salesforce data entity. It plays a critical role in data validation, processing logic, and endpoint routing within the Salesforce environment, ensuring that operations such as create, update, delete, or query are executed against the intended object type with precision. Accurate specification of sObjectType enables dynamic adjustment of request payloads and response handling based on the object’s schema and enforces compliance with Salesforce’s API naming conventions and permissions model.  \n**Field behavior:**  \n- Identifies the specific Salesforce object (sObject) type targeted by the API operation.  
\n- Determines the applicable schema, field availability, and validation rules based on the selected object.  \n- Must exactly match a valid Salesforce object API name recognized by the connected Salesforce instance, including case sensitivity.  \n- Routes the API request to the appropriate Salesforce object endpoint for the intended operation.  \n- Influences dynamic field mapping, data transformation, and response parsing processes.  \n- Supports both standard and custom Salesforce objects, with custom objects requiring the “__c” suffix.  \n**Implementation guidance:**  \n- Validate sObjectType values against the current Salesforce object metadata in the target environment to prevent invalid requests.  \n- Enforce exact case-sensitive matching with Salesforce API object names (e.g., \"Account\" not \"account\").  \n- Support and recognize both standard objects and custom objects, ensuring custom objects include the “__c” suffix.  \n- Implement robust error handling for invalid, unsupported, or deprecated sObjectType values to provide clear feedback.  \n- Regularly update the list of allowed sObjectType values to reflect changes in the Salesforce schema or environment.  \n- Use sObjectType to dynamically tailor API request payloads, field selections, and response parsing logic.  \n- Consider object-level permissions and authentication scopes when processing requests involving specific sObjectTypes.  \n**Examples:**  \n- \"Account\"  \n- \"Contact\"  \n- \"Opportunity\"  \n- \"Lead\"  \n- \"CustomObject__c\"  \n- \"Case\"  \n- \"Campaign\"  \n**Important notes:**  \n- Custom Salesforce objects must be referenced using their exact API names, including the “__c” suffix.  \n-"},"idLookup":{"type":"object","properties":{"extract":{"type":"string","description":"Please specify which field from your export data should map to External ID. It is very important to pick a field that is both unique, and guaranteed to always have a value when exported (otherwise the Salesforce Upsert operation will fail). If sample data is available then a select list should be displayed here to help you find the right field from your export data. 
If sample data is not available then you will need to manually type the field id that should map to External ID.\n\n**Field behavior**\n- Specifies which field from the incoming data contains the External ID value\n- **Used for upsert operations ONLY**\n- Maps the incoming data field to the Salesforce External ID field specified in `upsert.externalIdField`\n- Must reference a field that exists in the incoming data payload\n- The field value must be unique and always populated to ensure reliable matching\n- Generates field mapping at runtime to match incoming records with Salesforce records\n\n**When to use**\n- **upsert operations ONLY** — REQUIRED when operation is \"upsert\"\n- Works together with `upsert.externalIdField` to enable External ID matching\n\n**When not to use**\n- **update** — use `whereClause` instead\n- **delete** — use `whereClause` instead\n- **addupdate** — use `whereClause` instead\n- **insert with ignoreExisting** — use `whereClause` instead\n\n**Implementation guidance**\n- Choose a field from incoming data that contains unique, non-null values\n- Ensure the field is always populated in the incoming records to prevent operation failures\n- This field's value will be matched against the Salesforce External ID field specified in `upsert.externalIdField`\n- Use the exact field name as it appears in the incoming data (case-sensitive)\n- If sample data is available, validate that the field exists and has appropriate values\n- Test with sample data to ensure matching works correctly before production deployment\n\n**Example**\n\n**Upsert by External id**\n```json\n{\n  \"operation\": \"upsert\",\n  \"idLookup\": {\n    \"extract\": \"CustomerNumber\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\nMaps incoming field \"CustomerNumber\" → Salesforce field \"External_ID__c\"\n\n**How it works:**\n- Incoming record has `{\"CustomerNumber\": \"CUST-12345\", \"Name\": \"John\"}`\n- System extracts \"CUST-12345\" from CustomerNumber\n- Searches Salesforce External_ID__c field for \"CUST-12345\"\n- If found: updates the existing record\n- If not found: creates a new record with External_ID__c = \"CUST-12345\"\n\n**Common field names**\n- \"ExternalId\" — incoming field containing external identifier\n- \"CustomerId\" — incoming field with unique customer number\n- \"AccountNumber\" — incoming field with account number\n- \"SourceSystemId\" — incoming field from source system\n\n**Important notes**\n- **ONLY use for upsert operations**\n- For all other operations (update, delete, addupdate, insert with ignoreExisting), use `whereClause` instead\n- Field must exist in incoming data and be consistently populated\n- Works with `upsert.externalIdField` to enable External ID matching\n- Missing or null values will cause the upsert operation to fail\n- Field name is case-sensitive and must match exactly\n- If both `extract` and `whereClause` are set, `extract` takes precedence for upsert"},"whereClause":{"type":"string","description":"To determine if a record exists in Salesforce, define a SOQL query WHERE clause for the sObject type selected above. For example, if you are importing contact records with a unique email, your WHERE clause should identify contacts with the same email.\n\nHandlebars statements are allowed, and more complex WHERE clauses can contain AND and OR logic. 
For example, when the product name alone is not unique, you can define a WHERE clause to find any records where the product name exists AND the product also belongs to a specific product family, resulting in a unique product record that matches both criteria.\n\nTo find string values that contain spaces, wrap them in single quotes, for example:\n\nName = 'John Smith'\n\n**Field behavior**\n- Specifies a SOQL conditional expression to match existing records in Salesforce\n- Omit the \"WHERE\" keyword - only provide the condition expression\n- Supports Handlebars {{fieldName}} syntax to reference incoming data fields\n- Supports SOQL operators: =, !=, >, <, >=, <=, LIKE, IN, NOT IN\n- Supports logical operators: AND, OR, NOT\n- Can use parentheses for grouping complex conditions\n- The clause is evaluated for each incoming record to find matching Salesforce records\n\n**When to use (REQUIRED FOR)**\n- **update** — REQUIRED to identify which records to update\n- **delete** — REQUIRED to identify which records to delete\n- **addupdate** — REQUIRED to identify existing records (update if found, insert if not)\n- **insert with ignoreExisting: true** — REQUIRED to check if records exist (skip if found)\n\n**When not to use**\n- **upsert** — DO NOT use; use `extract` instead\n- **insert without ignoreExisting** — Not needed for simple inserts\n\n**Implementation guidance**\n- Write the clause following SOQL syntax rules, omitting the \"WHERE\" keyword\n- Use {{fieldName}} syntax to reference values from incoming records\n- Wrap string values containing spaces in single quotes: `Name = '{{FullName}}'`\n- Numeric and boolean values don't need quotes: `Age = {{Age}}`\n- Use indexed fields (Email, External ID) for better performance\n- Keep the clause selective to minimize records returned\n- Validate syntax before deployment to prevent runtime errors\n\n**Examples**\n\n**Update by Email**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\nFinds records with matching Email for update\n\n**Insert with ignore existing by Email**\n```json\n{\n  \"operation\": \"insert\",\n  \"ignoreExisting\": true,\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\nChecks if a record with matching Email exists; if found, skips creation\n\n**Addupdate (create or update) by Email**\n```json\n{\n  \"operation\": \"addupdate\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\nUpdates if Email matches, creates new record if not found\n\n**Delete by Account Number**\n```json\n{\n  \"operation\": \"delete\",\n  \"idLookup\": {\n    \"whereClause\": \"AccountNumber = '{{AcctNum}}'\"\n  }\n}\n```\nFinds records with matching AccountNumber for deletion\n\n**Complex and logic**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{ProductName}}' AND ProductFamily__c = '{{Family}}'\"\n  }\n}\n```\nMatches records where both Name AND ProductFamily match\n\n**Complex or logic**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}' OR Phone = '{{Phone}}'\"\n  }\n}\n```\nMatches records where Email OR Phone matches\n\n**String with spaces**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{FullName}}'\"\n  }\n}\n```\nNote: Single quotes around {{FullName}} handle names with spaces like \"John Smith\"\n\n**Important notes**\n- **REQUIRED** for update, delete, addupdate, and 
insert (with ignoreExisting)\n- DO NOT use for upsert operations - use `extract` instead\n- Use {{fieldName}} to reference incoming data fields\n- String values with spaces must be wrapped in single quotes\n- Numeric and boolean values don't need quotes\n- Poor clause design can impact performance and hit governor limits\n- Always test in a non-production environment before production deployment"}},"description":"Configuration for looking up existing Salesforce records. **This object should ONLY contain either `extract` OR `whereClause` - NOT both, and NOT operation or other properties.**\n\n**What this object contains**\nThis object should contain ONLY ONE of these properties:\n- `extract` — for upsert operations ONLY\n- `whereClause` — for update, delete, addupdate, and insert (with ignoreExisting)\n\n**DO NOT include operation, upsert, or any other properties here** - those belong at the parent salesforce level.\n\n**Field behavior**\n- Contains two strategies for finding existing records: `extract` and `whereClause`\n- **`extract`**: Used ONLY for upsert operations\n- **`whereClause`**: Used for update, delete, addupdate, and insert (with ignoreExisting)\n- Generates field mapping or SOQL queries at runtime to match records\n- Critical for preventing duplicate record creation and enabling record updates\n\n**When to use each field**\n\n**Use `extract` (upsert ONLY)**\n- **upsert** — REQUIRED: Maps incoming field to External ID field\n\n**Use `whereClause` (all other operations)**\n- **update** — REQUIRED: Identifies records to update via SOQL query\n- **delete** — REQUIRED: Identifies records to delete via SOQL query\n- **addupdate** — REQUIRED: Identifies existing records via SOQL query\n- **insert with ignoreExisting: true** — REQUIRED: Checks if records exist via SOQL query\n\n**Leave empty**\n- **insert without ignoreExisting** — No lookup needed for simple inserts\n\n**What to set (idLookup object content)**\n\n**For upsert**\n```json\n{\n  \"extract\": \"CustomerNumber\"\n}\n```\n\n**For update/delete/addupdate**\n```json\n{\n  \"whereClause\": \"Email = '{{Email}}'\"\n}\n```\n\n**Parent context examples (for reference)**\nThese show the complete parent-level structure. 
**idLookup itself should only contain extract or whereClause.**\n\n**Upsert (parent context)**\n```json\n{\n  \"operation\": \"upsert\",\n  \"idLookup\": {\n    \"extract\": \"CustomerNumber\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\n\n**Update (parent context)**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\n\n**Implementation guidance**\n- Use `extract` ONLY for upsert operations (with `upsert.externalIdField`)\n- Use `whereClause` for update, delete, addupdate, and insert with ignoreExisting\n- Do not set both `extract` and `whereClause` for the same operation\n- **ONLY set extract or whereClause - do not nest operation or other properties**\n- Ensure the lookup strategy matches the operation type\n- Test thoroughly in a non-production environment before production deployment\n\n**Important notes**\n- **This object ONLY contains `extract` or `whereClause`**\n- **`extract`** is for upsert ONLY - maps to External ID field\n- **`whereClause`** is for all other operations requiring lookups\n- Do NOT nest `operation`, `upsert`, or other properties here - those belong at the parent salesforce level\n- For upsert: `extract` + `upsert.externalIdField` work together (both at salesforce level)\n- For other operations: `whereClause` enables SOQL-based record matching\n- Incorrect configuration can cause duplicates or failed operations"},"upsert":{"type":"object","properties":{"externalIdField":{"type":"string","description":"Specifies the API name of the External ID field in Salesforce that will be used to match records during an upsert operation. This is the TARGET field in Salesforce, while `idLookup.extract` specifies the SOURCE field from incoming data.\n\n**Field behavior**\n- Identifies which Salesforce field serves as the external ID for record matching\n- Must be a field marked as \"External ID\" in Salesforce object configuration\n- Works together with `idLookup.extract` to enable upsert matching\n- The field must be indexed and unique in Salesforce\n- Supports text, number, and email field types configured as external IDs\n- Required when `operation: \"upsert\"`\n\n**How it works with idLookup.extract**\n- **`idLookup.extract`**: Specifies which incoming data field contains the matching value (e.g., \"CustomerNumber\")\n- **`externalIdField`**: Specifies which Salesforce field to match against (e.g., \"External_ID__c\")\n- Together they create the match: incoming \"CustomerNumber\" → Salesforce \"External_ID__c\"\n\n**Implementation guidance**\n- Use the exact Salesforce API name of the field (case-sensitive)\n- Verify the field is marked as \"External ID\" in Salesforce object settings\n- Ensure the field is indexed for optimal performance\n- Confirm the integration user has read/write access to this field\n- The field must have unique values to prevent ambiguous matches\n- Test in a non-production environment before production to verify configuration\n\n**Examples**\n\n**Standard External id field**\n```\n\"AccountNumber\"\n```\nStandard Account field marked as External ID\n\n**Custom External id field**\n```\n\"Custom_External_Id__c\"\n```\nCustom field marked as External ID\n\n**Email as External id**\n```\n\"Email\"\n```\nEmail field configured as External ID on Contact\n\n**NetSuite id as External id**\n```\n\"netsuite_conn__NetSuite_Id__c\"\n```\nNetSuite connector External ID field\n\n**How it works (Parent Context)**\nAt the parent `salesforce` level, the complete 
configuration looks like:\n```json\n{\n  \"operation\": \"upsert\",\n  \"idLookup\": {\n    \"extract\": \"CustomerNumber\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\nWhere:\n- `operation: \"upsert\"` — at salesforce level\n- `idLookup.extract: \"CustomerNumber\"` — incoming field, at salesforce level\n- `upsert.externalIdField: \"External_ID__c\"` — **THIS field**, inside upsert object\n\n**Common external id fields**\n- \"AccountNumber\" — standard Account field\n- \"Email\" — when configured as External ID on Contact\n- \"Custom_External_Id__c\" — custom field on any object\n- \"External_ID__c\" — common naming convention\n- \"Source_System_Id__c\" — for multi-system integrations\n\n**Important notes**\n- Field must be explicitly marked as \"External ID\" in Salesforce\n- Not all fields can be External IDs (must be text, number, or email type)\n- Missing or null values in this field will cause upsert to fail\n- If multiple records match the External ID, upsert will fail\n- Always use with `idLookup.extract` for upsert operations"}},"description":"Configuration for Salesforce upsert operations. This object contains ONLY the `externalIdField` property.\n\n**CRITICAL: This component is REQUIRED when `operation: \"upsert\"`. Always include this component for upsert operations.**\n\n**Field behavior**\n- **REQUIRED** when `operation: \"upsert\"`\n- Contains configuration specific to upsert behavior\n- **ONLY contains ONE property: `externalIdField`**\n- Works in conjunction with `idLookup.extract` (which is at the SAME level as this upsert object, NOT nested within it)\n\n**When to use**\n- **REQUIRED** when `operation` is \"upsert\" - MUST be included\n- Leave undefined for insert, update, delete, or addupdate operations\n- Without this object, upsert operations will fail\n\n**Why IT'S required**\n- Upsert operations need to know which Salesforce field to use for matching\n- The `externalIdField` property specifies the Salesforce External ID field\n- This works with `idLookup.extract` to enable record matching\n- Missing this object will cause the import to fail\n\n**Decision**\n**Return true/include this component if `operation: \"upsert\"` is set. This is REQUIRED for all upsert operations.**\n\n**What to set**\n**This object should ONLY contain:**\n```json\n{\n  \"externalIdField\": \"External_ID__c\"\n}\n```\n\n**DO NOT include operation, idLookup, or any other properties here** - those belong at the parent salesforce level.\n\n**Parent context (for reference only)**\nAt the parent `salesforce` level, you will also set:\n- `operation: \"upsert\"` (at salesforce level)\n- `idLookup.extract` (at salesforce level)\n- `upsert.externalIdField` (THIS object)\n\n**Implementation guidance**\n- Always set `externalIdField` when using upsert\n- Ensure the specified External ID field exists in Salesforce\n- Verify the field is marked as \"External ID\" in Salesforce\n- **ONLY set the externalIdField property - nothing else**\n\n**Important notes**\n- **This object ONLY contains `externalIdField`**\n- Do NOT nest `operation`, `idLookup`, or other properties here\n- Those properties belong at the parent salesforce level, not nested in this upsert object\n- May be extended with additional upsert-specific options in the future\n- Only relevant when operation is \"upsert\""},"upsertpicklistvalues":{"type":"object","properties":{"type":{"type":"string","enum":["picklist","multipicklist"],"description":"Specifies the type of picklist field being managed. 
Salesforce supports single-select picklists and multi-select picklists.\n\n**Field behavior**\n- Determines whether the picklist allows single or multiple value selection\n- Affects UI rendering and validation in Salesforce\n- Automatically converted to lowercase before processing\n\n**Picklist types**\n\n****picklist****\n- Standard single-select picklist\n- User can select only one value at a time\n- Most common picklist type\n\n****multipicklist****\n- Multi-select picklist\n- User can select multiple values (separated by semicolons)\n- Use for categories, tags, or multi-value selections\n\n**Examples**\n- \"picklist\" — for Status, Priority, Category fields\n- \"multipicklist\" — for Tags, Skills, Interests fields"},"fullName":{"type":"string","description":"The API name of the picklist field in Salesforce, including the object name.\n\n**Field behavior**\n- Fully qualified field name: ObjectName.FieldName__c\n- Must match exact Salesforce API name\n- Case-sensitive\n- Required for picklist value upsert operations\n\n**Examples**\n- \"Account.MyPicklist__c\"\n- \"Contact.Status__c\"\n- \"CustomObject__c.Category__c\""},"label":{"type":"string","description":"The display label for the picklist field in Salesforce UI.\n\n**Field behavior**\n- Human-readable label shown in Salesforce interface\n- Can contain spaces and special characters\n- Does not need to match API name\n\n**Examples**\n- \"My Picklist\"\n- \"Product Category\"\n- \"Customer Status\""},"visibleLines":{"type":"number","description":"For multi-select picklists, specifies how many lines are visible in the UI without scrolling.\n\n**Field behavior**\n- Only applies to multipicklist fields\n- Controls height of the picklist selection box\n- Default is typically 4-5 lines\n\n**Examples**\n- 3 — shows 3 values at once\n- 5 — shows 5 values at once\n- 10 — shows more values for large lists"}},"description":"Configuration for managing Salesforce picklist field metadata when using the `upsertpicklistvalues` operation. This enables dynamic creation or updates to picklist field definitions.\n\n**Field behavior**\n- Used ONLY when `operation: \"upsertpicklistvalues\"`\n- Manages picklist field metadata, not picklist values themselves\n- Enables programmatic picklist field management\n- Requires appropriate Salesforce metadata permissions\n\n**When to use**\n- When operation is set to \"upsertpicklistvalues\"\n- For managing picklist field definitions programmatically\n- When syncing picklist configurations between orgs\n- For automated picklist field deployment\n\n**Important notes**\n- This manages the FIELD itself, not the individual picklist VALUES\n- Requires Salesforce API and Metadata API permissions\n- Should only be set when operation is \"upsertpicklistvalues\"\n- Most imports won't use this - it's for metadata management"},"removeNonSubmittableFields":{"type":"boolean","description":"Indicates whether fields that are not eligible for submission to Salesforce—such as read-only, system-generated, formula, or otherwise restricted fields—should be automatically excluded from the data payload before it is sent. This feature helps prevent submission errors by ensuring that only valid, submittable fields are included in the API request, thereby improving data integrity and reducing the likelihood of operation failures. 
It operates exclusively on the outgoing submission payload without modifying the original source data, making it a safe and effective way to sanitize data before interaction with Salesforce APIs.\n\n**Field behavior**\n- When enabled (set to true), the system analyzes the outgoing data payload and removes any fields identified as non-submittable based on Salesforce metadata, field-level security, and API constraints.\n- When disabled (set to false) or omitted, all fields present in the payload are submitted as-is, which may lead to errors if restricted, read-only, or system-managed fields are included.\n- Enhances data integrity by proactively filtering out fields that could cause rejection or failure during Salesforce data operations.\n- Applies only to the outgoing submission payload, ensuring the original source data remains unchanged and intact.\n- Dynamically adapts to different Salesforce objects, API versions, and user permissions by referencing up-to-date metadata and security settings.\n\n**Implementation guidance**\n- Enable this flag in integration scenarios where the data schema may include fields that Salesforce does not accept for the target object or API version.\n- Maintain an up-to-date cache or real-time access to Salesforce metadata to accurately reflect field-level permissions, submittability, and API changes.\n- Implement logging or reporting mechanisms to track which fields are removed, aiding in auditing, troubleshooting, and transparency.\n- Use in conjunction with other data validation, transformation, and permission-checking steps to ensure clean, compliant data submissions.\n- Thoroughly test in development and staging environments to understand the impact on data flows, error handling, and downstream processes.\n- Consider user roles and permissions, as submittability may vary depending on the authenticated user's access rights and profile settings.\n\n**Examples**\n- `removeNonSubmittableFields: true` — Automatically removes read-only or system-managed fields such as CreatedDate, LastModifiedById, or formula fields before submission.\n- `removeNonSubmittableFields: false` — Submits all fields, potentially causing errors if restricted or read-only fields are included.\n- Omitting the property defaults to false, meaning no automatic removal of non-submittable"},"document":{"type":"object","properties":{"id":{"type":"string","description":"Unique identifier for the document within the Salesforce system, serving as the primary key that distinctly identifies each document record. This identifier is immutable once assigned and must not be altered or reused to maintain data integrity. It is essential for referencing the document in API calls, queries, integrations, and any operations involving the document. The ID is a base-62 encoded string that can be either 15 characters long, which is case-sensitive, or 18 characters long, which includes a case-insensitive checksum to enhance reliability in external systems. Proper handling of this ID ensures accurate linkage and manipulation of document records across various Salesforce processes and integrations.  \n**Field behavior:**  \n- Acts as the primary key uniquely identifying a document record within Salesforce.  \n- Immutable after creation; should never be changed or reassigned.  \n- Used consistently to reference the document across API calls, queries, and integrations.  \n- Case-sensitive and may vary in length (15 or 18 characters).  
\n- Serves as a critical reference point for establishing relationships with other records.  \n**Implementation guidance:**  \n- Must be generated exclusively by Salesforce or the managing system to guarantee uniqueness.  \n- Should be handled as a string to accommodate Salesforce’s ID formats.  \n- Validate incoming IDs against Salesforce ID format standards to ensure correctness.  \n- Avoid manual generation or modification of IDs to prevent data inconsistencies.  \n- Use the 18-character format when possible to leverage the checksum for case-insensitive environments.  \n**Examples:**  \n- \"0151t00000ABCDE\"  \n- \"0151t00000ABCDEAAA\"  \n**Important notes:**  \n- 15-character IDs are case-sensitive, while 18-character IDs include a case-insensitive checksum.  \n- Do not manually create or alter IDs; always use Salesforce-generated values.  \n- This ID is critical for performing document updates, deletions, retrievals, and establishing relationships.  \n- Using the correct ID format is essential for API compatibility and data integrity.  \n**Dependency chain:**  \n- Required for all operations on salesforce.document entities, including CRUD actions.  \n- Other fields and processes may reference this ID to link related records or trigger actions.  \n- Integral to workflows, triggers, and integrations that depend on document identification.  \n**Technical details:**  \n- Salesforce record IDs are base-62 encoded strings.  \n- IDs can be 15 characters (case-sensitive) or 18"},"name":{"type":"string","description":"The name of the document as stored in Salesforce, serving as a unique and human-readable identifier within the Salesforce environment. This string value represents the document's title or label, enabling users and systems to easily recognize, retrieve, and manage the document across user interfaces, search functionalities, and API interactions. It is a critical attribute used to reference the document in various operations such as creation, updates, searches, display contexts, and integration workflows, ensuring clear identification and organization within folders or libraries. 
The name plays a pivotal role in document management by facilitating sorting, filtering, and categorization, and it must adhere to Salesforce’s naming conventions and constraints to maintain system integrity and usability.\n\n**Field behavior**\n- Acts as the primary human-readable identifier for the document.\n- Used in UI elements, search queries, and API calls to locate or reference the document.\n- Typically mandatory when creating or updating document records.\n- Should be unique within the scope of the relevant Salesforce object, folder, or library to avoid ambiguity.\n- Influences sorting, filtering, and organization of documents within lists or folders.\n- Changes to the name may trigger updates or validations in dependent systems or references.\n- Supports international characters to accommodate global users.\n\n**Implementation guidance**\n- Choose descriptive yet concise names to enhance clarity, usability, and discoverability.\n- Validate input to exclude unsupported special characters and ensure compliance with Salesforce naming rules.\n- Enforce uniqueness constraints within the relevant context to prevent conflicts and confusion.\n- Consider the impact of renaming documents on existing references, hyperlinks, integrations, and audit trails.\n- Account for potential length restrictions and implement truncation or user prompts as necessary.\n- Support internationalization by allowing UTF-8 encoded characters where appropriate.\n- Implement validation to prevent use of reserved words or prohibited characters.\n\n**Examples**\n- \"Quarterly Sales Report Q1 2024\"\n- \"Employee Handbook\"\n- \"Project Plan - Alpha Release\"\n- \"Marketing Strategy Overview\"\n- \"Client Contract - Acme Corp\"\n\n**Important notes**\n- The maximum length of the name may be limited (commonly up to 255 characters), depending on Salesforce configuration.\n- Renaming a document can affect external references, hyperlinks, or integrations relying on the original name.\n- Case sensitivity in name comparisons may vary based on Salesforce settings and should be considered when implementing logic.\n- Special characters and whitespace handling should align with Salesforce’s accepted standards to avoid errors.\n- Avoid using reserved words or prohibited characters to ensure compatibility and"},"folderId":{"type":"string","description":"The unique identifier of the folder within Salesforce where the document is stored. This ID is essential for organizing, categorizing, and managing documents efficiently within the Salesforce environment. It enables precise association of documents to specific folders, facilitating streamlined retrieval, access control, and hierarchical organization of document assets. By linking a document to a folder, it inherits the folder’s permissions and visibility settings, ensuring consistent access management and supporting structured document workflows. 
Proper use of this identifier allows for effective filtering, querying, and management of documents based on their folder location, which is critical for maintaining an organized and secure document repository.\n\n**Field behavior**\n- Identifies the exact folder containing the document.\n- Associates the document with a designated folder for structured organization.\n- Must reference a valid, existing folder ID within Salesforce.\n- Typically represented as a string conforming to Salesforce ID formats (15- or 18-character).\n- Influences document visibility and access based on folder permissions.\n- Changing the folderId moves the document to a different folder, affecting its context and access.\n- Inherits folder-level sharing and security settings automatically.\n\n**Implementation guidance**\n- Verify that the folderId exists and is accessible in the Salesforce instance before assignment.\n- Validate the folderId format to ensure it matches Salesforce’s 15- or 18-character ID standards.\n- Use this field to filter, query, or categorize documents by their folder location in API operations.\n- When creating or updating documents, setting this field assigns or moves the document to the specified folder.\n- Handle permission checks to ensure the user or integration has rights to assign or access the folder.\n- Consider the impact on sharing rules, workflows, and automation triggered by folder changes.\n- Ensure case sensitivity is preserved when handling folder IDs.\n\n**Examples**\n- \"00l1t000003XyzA\" (example of a Salesforce folder ID)\n- \"00l5g000004AbcD\"\n- \"00l9m00000FghIj\"\n\n**Important notes**\n- The folderId must be valid and the folder must be accessible by the user or integration performing the operation.\n- Using an incorrect or non-existent folderId will cause errors or result in improper document categorization.\n- Folder-level permissions in Salesforce can restrict the ability to assign documents or access folder contents.\n- Changes to folderId may affect document sharing, visibility settings, and trigger related workflows.\n- Folder IDs are case-sensitive and must be handled accordingly.\n- Moving documents between folders"},"contentType":{"type":"string","description":"The MIME type of the document, specifying the exact format of the file content stored within the Salesforce document. This field enables systems and applications to accurately identify, process, render, or handle the document by indicating its media type, such as PDF, image, audio, or text formats. It adheres to standard MIME type conventions (type/subtype) and may include additional parameters like character encoding when necessary. Properly setting this field ensures seamless integration, correct display, appropriate application usage, and reliable content negotiation across diverse platforms and services. 
It plays a critical role in workflows, security scanning, compliance validation, and API interactions by guiding how the document is stored, transmitted, and interpreted.\n\n**Field behavior**\n- Defines the media type of the document content to guide processing, rendering, and handling.\n- Identifies the specific file format (e.g., PDF, JPEG, DOCX) for compatibility and validation purposes.\n- Facilitates content negotiation and appropriate response handling between systems and applications.\n- Typically formatted as a standard MIME type string (type/subtype), optionally including parameters such as charset.\n- Influences document handling in workflows, security scans, user interfaces, and API interactions.\n- Serves as a key attribute for determining how the document is stored, transmitted, and displayed.\n\n**Implementation guidance**\n- Always use valid MIME types conforming to official standards (e.g., \"application/pdf\", \"image/png\") to ensure broad compatibility.\n- Verify that the contentType accurately reflects the actual file content to prevent processing errors or misinterpretation.\n- Update the contentType promptly if the document format changes to maintain consistency and reliability.\n- Utilize this field to enable correct content handling in APIs, integrations, client applications, and automated workflows.\n- Include relevant parameters (such as charset for text-based documents) when applicable to provide additional context.\n- Consider validating the contentType against the file extension and content during upload or processing to enhance data integrity.\n\n**Examples**\n- \"application/pdf\" for PDF documents.\n- \"image/jpeg\" for JPEG image files.\n- \"application/vnd.openxmlformats-officedocument.wordprocessingml.document\" for Microsoft Word DOCX files.\n- \"text/plain; charset=utf-8\" for plain text files with UTF-8 encoding.\n- \"application/json\" for JSON formatted documents.\n- \"audio/mpeg\" for MP3 audio files.\n\n**Important notes**\n- Incorrect or mismatched contentType values can cause improper document rendering, processing failures,"},"developerName":{"type":"string","description":"developerName is a unique, developer-friendly identifier assigned to the document within the Salesforce environment. It acts as a stable and consistent reference used primarily in programmatic contexts such as Apex code, API calls, metadata configurations, deployment scripts, and integration workflows. This identifier ensures reliable access, manipulation, and management of the document across various Salesforce components and external systems. The value should be concise, descriptive, and strictly adhere to Salesforce naming conventions to prevent conflicts and ensure maintainability. 
Typically, it employs camelCase or underscores, excludes spaces and special characters, and is designed to remain unchanged over time to avoid breaking dependencies or integrations.\n\n**Field behavior**\n- Serves as a unique and immutable identifier for the document within the Salesforce org.\n- Widely used in code, APIs, metadata, deployment processes, and integrations to reference the document reliably.\n- Should remain stable and not be altered frequently to maintain integration and system stability.\n- Enforces naming conventions that disallow spaces and special characters, favoring camelCase or underscores.\n- Case-insensitive in many contexts but consistent casing is recommended for clarity and maintainability.\n- Acts as a key reference point in metadata relationships and dependency mappings.\n\n**Implementation guidance**\n- Ensure the developerName is unique within the Salesforce environment to prevent naming collisions.\n- Follow Salesforce naming rules: start with a letter, use only alphanumeric characters and underscores.\n- Avoid reserved keywords, spaces, special characters, and punctuation marks.\n- Keep the identifier concise yet descriptive enough to clearly convey the document’s purpose or function.\n- Validate the developerName against Salesforce’s character set, length restrictions, and naming policies before deployment.\n- Plan for the developerName to remain stable post-deployment to avoid breaking references and integrations.\n- Use consistent casing conventions (camelCase or underscores) to improve readability and reduce errors.\n- Incorporate meaningful terms that reflect the document’s role or content for easier identification.\n\n**Examples**\n- \"InvoiceTemplate\"\n- \"CustomerAgreementDoc\"\n- \"SalesReport2024\"\n- \"ProductCatalog_v2\"\n- \"EmployeeOnboardingGuide\"\n\n**Important notes**\n- Changing the developerName after deployment can disrupt integrations, metadata references, and dependent components.\n- It is distinct from the document’s display name, which can include spaces and be more user-friendly.\n- While Salesforce treats developerName as case-insensitive in many scenarios, maintaining consistent casing improves readability and maintenance.\n- Commonly referenced in metadata APIs, deployment tools, automation scripts, and"},"isInternalUseOnly":{"type":"boolean","description":"isInternalUseOnly indicates whether the document is intended exclusively for internal use within the organization and must not be shared with external parties. This flag is critical for protecting sensitive, proprietary, or confidential information by restricting access strictly to authorized internal users. It serves as a definitive marker within the document’s metadata to enforce internal visibility policies, prevent accidental or unauthorized external distribution, and support compliance with organizational security standards. 
Proper handling of this flag ensures that internal communications, strategic plans, financial data, and other sensitive materials remain protected from exposure outside the company.\n\n**Field behavior**\n- When set to true, the document is strictly restricted to internal users and must not be accessible or shared with any external entities.\n- When false or omitted, the document may be accessible to external users, subject to other access control mechanisms.\n- Commonly used to flag documents containing sensitive business strategies, internal communications, proprietary data, or confidential policies.\n- Influences user interface elements and API responses to hide, restrict, or clearly label document visibility accordingly.\n- Changes to this flag can trigger notifications, workflow actions, or audit events to ensure proper handling and compliance.\n- May affect document indexing, search results, and export capabilities to prevent unauthorized dissemination.\n\n**Implementation guidance**\n- Always verify this flag before granting document access in both user interfaces and backend APIs to ensure compliance with internal use policies.\n- Use in conjunction with role-based access controls, group memberships, document classification labels, and data sensitivity tags to enforce comprehensive security.\n- Carefully manage updates to this flag to prevent inadvertent exposure of confidential information; consider implementing approval workflows and multi-level reviews for changes.\n- Implement audit logging for all changes to this flag to support compliance, traceability, and forensic investigations.\n- Ensure all integrated systems, third-party applications, and data export mechanisms respect this flag to maintain consistent access restrictions.\n- Provide clear UI indicators, warnings, or access restrictions to users when accessing documents marked as internal use only.\n- Regularly review and validate the accuracy of this flag to maintain up-to-date security postures.\n\n**Examples**\n- true: A confidential internal report on upcoming product launches accessible only to employees.\n- false: A publicly available user manual or product datasheet intended for customers and partners.\n- true: Internal HR policies and procedures documents restricted to company staff.\n- false: Press releases or marketing materials distributed externally.\n- true: Financial forecasts and budget plans shared exclusively within the finance department.\n- true: Internal audit findings and compliance reports"},"isPublic":{"type":"boolean","description":"Indicates whether the document is publicly accessible to all users or restricted exclusively to authorized users within the system. This property governs the document's visibility and access permissions, determining if authentication is required to view its content. When set to true, the document becomes openly available without any access controls, allowing anyone—including unauthenticated users—to access it freely. Conversely, setting this flag to false enforces security measures that limit access based on user roles, permissions, and organizational sharing rules, ensuring that only authorized personnel can view the document. 
This setting directly impacts how the document appears in search results, sharing interfaces, and external indexing services, making it a critical factor in managing data privacy, compliance, and information governance.\n\n**Field behavior**\n- When true, the document is accessible by anyone, including unauthenticated users.\n- When false, access is restricted to users with explicit permissions or roles.\n- Directly influences the document’s visibility in search results, sharing interfaces, and external indexing.\n- Changes to this property take immediate effect, altering who can view the document.\n- Modifications may trigger updates in access control enforcement mechanisms.\n\n**Implementation guidance**\n- Use this property to clearly define and enforce document sharing and visibility policies.\n- Always set to false for sensitive, confidential, or proprietary documents to prevent unauthorized access.\n- Implement strict validation to ensure only users with appropriate administrative rights can modify this property.\n- Consider organizational compliance requirements, privacy regulations, and data security standards before making documents public.\n- Regularly monitor and audit changes to this property to maintain security oversight and compliance.\n- Plan changes carefully, as toggling this setting can have immediate and broad impact on document accessibility.\n\n**Examples**\n- true: A publicly available product catalog, marketing brochure, or press release intended for external audiences.\n- false: Internal project plans, financial reports, HR documents, or any content restricted to company personnel.\n\n**Important notes**\n- Making a document public may expose it to indexing by search engines if accessible via public URLs.\n- Changes to this property have immediate effect on document accessibility; ensure appropriate change management.\n- Ensure alignment with corporate governance, legal requirements, and data protection policies when toggling public access.\n- Public documents should be reviewed periodically to confirm that their visibility remains appropriate.\n- Unauthorized changes to this property can lead to data leaks or compliance violations.\n\n**Dependency chain**\n- Access control depends on user roles, profiles, and permission sets configured within the system.\n- Interacts"}},"description":"The 'document' property represents a digital file or record associated with a Salesforce entity, encompassing a wide range of file types such as attachments, reports, images, contracts, spreadsheets, and other files stored within the Salesforce platform. It acts as a comprehensive container that includes both the file's binary content—typically encoded in base64—and its associated metadata, such as file name, description, MIME type, and tags. This property enables seamless integration, retrieval, upload, update, and management of files within Salesforce-related workflows and processes. By linking relevant documents directly to Salesforce records, it enhances data organization, accessibility, collaboration, and compliance across various business functions. 
Additionally, it supports versioning, audit trails, and can trigger automation processes like workflows, validation rules, and triggers, ensuring robust document lifecycle management within the Salesforce ecosystem.\n\n**Field behavior**\n- Contains both the binary content (usually base64-encoded) and descriptive metadata of a file linked to a Salesforce record.\n- Supports a diverse array of file types including PDFs, images, reports, contracts, spreadsheets, and other common document formats.\n- Facilitates operations such as uploading new files, retrieving existing files, updating metadata, deleting documents, and managing versions within Salesforce.\n- May be required or optional depending on the specific API operation, object context, or organizational business rules.\n- Changes to the document content or metadata can trigger Salesforce workflows, triggers, validation rules, or automation processes.\n- Maintains versioning and audit trails when integrated with Salesforce Content or Files features.\n- Respects Salesforce security and sharing settings to control access and protect sensitive information.\n\n**Implementation guidance**\n- Encode the document content in base64 format when transmitting via API to ensure data integrity and compatibility.\n- Validate file size and type against Salesforce platform limits and organizational policies before upload to prevent errors.\n- Provide comprehensive metadata including file name, description, content type (MIME type), file extension, and relevant tags or categories to facilitate search, classification, and governance.\n- Manage user permissions and access controls in accordance with Salesforce security and sharing settings to safeguard sensitive data.\n- Use appropriate Salesforce API endpoints (e.g., /sobjects/Document, Attachment, ContentVersion) and HTTP methods aligned with the document type and intended operation.\n- Implement robust error handling to manage responses related to file size limits, unsupported formats, permission denials, or network issues.\n- Leverage Salesforce Content or Files features for enhanced document management capabilities such as version control, sharing,"},"attachment":{"type":"object","properties":{"id":{"type":"string","description":"Unique identifier for the attachment within the Salesforce system, serving as the immutable primary key for the attachment object. This ID is essential for accurately referencing, retrieving, updating, or deleting the specific attachment record through API calls. It must conform to Salesforce's standardized ID format, which can be either a 15-character case-sensitive or an 18-character case-insensitive alphanumeric string, with the 18-character version preferred for integrations due to its case insensitivity and reduced risk of errors. 
While the ID itself does not contain metadata about the attachment's content, it is crucial for linking the attachment to related Salesforce records and ensuring precise operations on the attachment resource across the platform.\n\n**Field behavior**\n- Acts as the unique and immutable primary key for the attachment record.\n- Required for all API operations involving the attachment, including retrieval, updates, and deletions.\n- Ensures consistent and precise identification of the attachment in all Salesforce API interactions.\n- Remains constant and unchangeable throughout the lifecycle of the attachment once created.\n\n**Implementation guidance**\n- Must strictly adhere to Salesforce ID formats: either 15-character case-sensitive or 18-character case-insensitive alphanumeric strings.\n- Should be dynamically obtained from Salesforce API responses when creating or querying attachments to ensure accuracy.\n- Avoid hardcoding or manually entering IDs to prevent data integrity issues and API errors.\n- Validate the ID format programmatically before use to ensure compatibility with Salesforce API requirements.\n- Prefer using the 18-character ID in integration scenarios to minimize case sensitivity issues.\n\n**Examples**\n- \"00P1t00000XyzAbCDE\"\n- \"00P1t00000XyzAbCDEAAA\"\n- \"00P3t00000LmNOpQR1\"\n\n**Important notes**\n- The 15-character ID is case-sensitive, whereas the 18-character ID is case-insensitive and recommended for integration scenarios.\n- Using an incorrect, malformed, or outdated ID will result in API errors or failed operations.\n- The ID does not convey any descriptive information about the attachment’s content or metadata.\n- Always retrieve and use the ID directly from Salesforce API responses to ensure validity and prevent errors.\n- The ID is unique within the Salesforce org and cannot be reused or duplicated.\n\n**Dependency chain**\n- Generated automatically by Salesforce upon creation of the attachment record.\n- Utilized by other API calls and properties that require precise attachment identification.\n- Linked to parent or related Salesforce objects, integrating the attachment"},"name":{"type":"string","description":"The name property represents the filename of the attachment within Salesforce and serves as the primary identifier for the attachment. It is crucial for enabling easy recognition, retrieval, and management of attachments both in the user interface and through API operations. This property typically includes the file extension, which specifies the file type and ensures proper handling, display, and processing of the file across different systems and platforms. The filename should be meaningful, descriptive, and concise to provide clarity and ease of use, especially when managing multiple attachments linked to a single parent record. 
Adhering to proper naming conventions helps maintain organization, improves searchability, and prevents conflicts or ambiguities that could arise from duplicate or unclear filenames.\n\n**Field behavior**\n- Stores the filename of the attachment as a string value.\n- Acts as the key identifier for the attachment within the Salesforce environment.\n- Displayed prominently in the Salesforce UI and API responses when viewing or managing attachments.\n- Should be unique within the context of the parent record to avoid confusion.\n- Includes file extensions (e.g., .pdf, .jpg, .docx) to denote the file type and facilitate correct processing.\n- Case sensitivity may apply depending on the context, affecting retrieval and display.\n- Used in URL generation and API endpoints, impacting accessibility and referencing.\n\n**Implementation guidance**\n- Validate filenames to exclude prohibited characters (such as \\ / : * ? \" < > |) and other special characters that could cause errors or security vulnerabilities.\n- Ensure the filename length does not exceed Salesforce’s maximum limit, typically 255 characters.\n- Always include the appropriate and accurate file extension to enable correct file type recognition.\n- Use clear, descriptive, and consistent naming conventions that reflect the content or purpose of the attachment.\n- Perform validation checks before upload to prevent failures or inconsistencies.\n- Avoid renaming attachments post-upload to maintain integrity of references and links.\n- Consider localization and encoding standards to support international characters where applicable.\n\n**Examples**\n- \"contract_agreement.pdf\"\n- \"profile_picture.jpg\"\n- \"sales_report_q1_2024.xlsx\"\n- \"meeting_notes.txt\"\n- \"project_plan_v2.docx\"\n\n**Important notes**\n- Filenames are case-sensitive in certain contexts, which can impact retrieval and display.\n- Avoid special characters that may interfere with URLs, file systems, or API processing.\n- Renaming an attachment after upload can disrupt existing references or links to the file.\n- Consistency between the filename and the actual content enhances user"},"parentId":{"type":"string","description":"The unique identifier of the parent Salesforce record to which the attachment is linked. This field establishes a direct and essential relationship between the attachment and its parent object, such as an Account, Contact, Opportunity, or any other Salesforce record type that supports attachments. It ensures that attachments are correctly associated within the Salesforce data model, enabling accurate retrieval, management, and organization of attachments in relation to their parent records. By linking attachments to their parent records, this field facilitates data integrity, efficient querying, and seamless navigation between related records. It is a mandatory field when creating or updating attachments and must reference a valid, existing Salesforce record ID. 
Proper use of this field guarantees referential integrity, enforces access controls based on parent record permissions, and impacts the visibility and lifecycle of the attachment within the Salesforce environment.\n\n**Field behavior**\n- Represents the Salesforce record ID that the attachment is associated with.\n- Mandatory when creating or updating an attachment to specify its parent record.\n- Must correspond to an existing Salesforce record ID of a valid object type that supports attachments.\n- Enables querying, filtering, and reporting of attachments based on their parent record.\n- Enforces referential integrity between attachments and parent records.\n- Changes to the parent record, including deletion or modification, can affect the visibility, accessibility, and lifecycle of the attachment.\n- Access to the attachment is governed by the permissions and sharing settings of the parent record.\n\n**Implementation guidance**\n- Ensure the parentId is a valid 15- or 18-character Salesforce record ID, with preference for 18-character IDs to avoid case sensitivity issues.\n- Verify the existence and active status of the parent record before assigning the parentId.\n- Confirm that the parent object type supports attachments to prevent errors during attachment operations.\n- Handle user permissions and sharing settings to ensure appropriate access rights to the parent record and its attachments.\n- Use Salesforce APIs or metadata services to validate the parentId format, object type, and existence.\n- Implement robust error handling for invalid, non-existent, or unauthorized parentId values during attachment creation or updates.\n- Consider the impact of parent record lifecycle events (such as deletion or archiving) on associated attachments.\n\n**Examples**\n- \"0011a000003XYZ123\" (Account record ID)\n- \"0031a000004ABC456\" (Contact record ID)\n- \"0061a000005DEF789\" (Opportunity record ID)\n\n**Important notes**\n- The parentId is critical for maintaining data integrity and ensuring"},"contentType":{"type":"string","description":"The MIME type of the attachment content, specifying the exact format and nature of the file (e.g., \"image/png\", \"application/pdf\", \"text/plain\"). This information is critical for correctly interpreting, processing, and displaying the attachment across different systems and platforms. It enables clients and servers to determine how to handle the file, whether to render it inline, prompt for download, or apply specific processing rules. Accurate contentType values facilitate validation, security scanning, and compatibility checks, ensuring that the attachment is managed appropriately throughout its lifecycle. 
Properly defined contentType values improve interoperability, user experience, and system security by enabling precise content handling and reducing the risk of errors or vulnerabilities.\n\n**Field behavior**\n- Defines the media type of the attachment to guide processing, rendering, and handling decisions.\n- Used by clients, servers, and intermediaries to determine appropriate actions such as inline display, download prompts, or specialized processing.\n- Supports validation of file format during upload, storage, and download operations to ensure data integrity.\n- Influences security measures including filtering, scanning for malware, and enforcing content policies.\n- Affects user interface elements like preview generation, icon selection, and content categorization.\n- May be used in logging, auditing, and analytics to track content types handled by the system.\n\n**Implementation guidance**\n- Must strictly adhere to the standard MIME type format as defined by IANA (e.g., \"type/subtype\").\n- Should accurately reflect the actual content format to prevent misinterpretation or processing errors.\n- Use generic types such as \"application/octet-stream\" only when the specific type cannot be determined.\n- Include optional parameters (e.g., charset, boundary) when relevant to fully describe the content.\n- Ensure the contentType is set or updated whenever the attachment content changes to maintain consistency.\n- Validate the contentType against the file extension and actual content where feasible to enhance reliability.\n- Consider normalizing the contentType to lowercase to maintain consistency across systems.\n- Avoid relying solely on contentType for security decisions; combine with other metadata and content inspection.\n\n**Examples**\n- \"image/jpeg\"\n- \"application/pdf\"\n- \"text/plain; charset=utf-8\"\n- \"application/vnd.ms-excel\"\n- \"audio/mpeg\"\n- \"video/mp4\"\n- \"application/json\"\n- \"multipart/form-data; boundary=something\"\n- \"application/zip\"\n\n**Important notes**\n- An incorrect, missing, or misleading contentType can lead to improper"},"isPrivate":{"type":"boolean","description":"Indicates whether the attachment is designated as private, restricting its visibility exclusively to users who have been explicitly authorized to access it. This setting is critical for maintaining confidentiality and ensuring that sensitive or proprietary information contained within attachments is accessible only to appropriate personnel. When marked as private, the attachment’s visibility is constrained based on the user’s permissions and the sharing settings of the parent record, thereby reinforcing data security within the Salesforce environment. 
This field directly controls the scope of access to the attachment, dynamically influencing who can view or interact with it, and must be managed carefully to align with organizational privacy policies and compliance requirements.\n\n**Field behavior**\n- When set to true, the attachment is accessible only to users with explicit authorization, effectively hiding it from unauthorized users.\n- When set to false or omitted, the attachment inherits the visibility of its parent record and is accessible to all users who have access to that record.\n- Directly affects the attachment’s visibility scope and access control within Salesforce.\n- Changes to this field immediately impact user access permissions for the attachment.\n- Does not override broader organizational sharing settings but further restricts access.\n- The privacy setting applies in conjunction with existing record-level and object-level security controls.\n\n**Implementation guidance**\n- Use this field to enforce strict data privacy and protect sensitive attachments in compliance with organizational policies and regulatory requirements.\n- Ensure that user roles, profiles, and sharing rules are properly configured to respect this privacy setting.\n- Validate that the value is a boolean (true or false) to prevent data inconsistencies.\n- Evaluate the implications on existing sharing rules and record-level security before modifying this field.\n- Consider implementing audit logging to track changes to this privacy setting for security and compliance purposes.\n- Test changes in a non-production environment to understand the impact on user access before deploying to production.\n- Communicate changes to relevant stakeholders to avoid confusion or unintended access issues.\n\n**Examples**\n- `true` — The attachment is private and visible only to users explicitly authorized to access it.\n- `false` — The attachment is public within the context of the parent record and visible to all users who have access to that record.\n\n**Important notes**\n- Setting this field to true does not grant access to users who lack permission to view the parent record; it only further restricts access.\n- Modifying this setting can significantly affect user access and should be managed carefully to avoid unintended data exposure or access denial.\n- Some Salesforce editions or configurations may have limitations or specific behaviors regarding attachment privacy"},"description":{"type":"string","description":"A textual description providing additional context, detailed information, and clarifying the content, purpose, or relevance of the attachment within Salesforce. This field enhances user understanding and facilitates easier identification, organization, and management of attachments by serving as descriptive metadata that complements the attachment’s core data without containing the attachment content itself. It supports plain text with special characters, punctuation, and full Unicode, enabling expressive, internationalized, and accessible descriptions. 
Typically displayed in user interfaces alongside attachment details, this description also plays a crucial role in search, filtering, sorting, and reporting operations, improving overall data discoverability and usability.\n\n**Field behavior**\n- Optional and editable field used to describe the attachment’s nature or relevance.\n- Supports plain text input including special characters, punctuation, and Unicode.\n- Displayed prominently in user interfaces where attachment details appear.\n- Utilized in search, filtering, sorting, and reporting functionalities to enhance data retrieval.\n- Does not contain or affect the actual attachment content or file storage.\n\n**Implementation guidance**\n- Encourage concise yet informative descriptions to aid quick identification.\n- Validate and sanitize input to prevent injection attacks and ensure data integrity.\n- Support full Unicode to accommodate international characters and symbols.\n- Enforce a reasonable maximum length (commonly 255 characters) to maintain performance and UI consistency.\n- Avoid inclusion of sensitive or confidential information unless appropriate security controls are applied.\n- Implement trimming or sanitization routines to maintain clean and consistent data.\n\n**Examples**\n- \"Quarterly financial report for Q1 2024.\"\n- \"Signed contract agreement with client XYZ.\"\n- \"Product brochure for the new model launch.\"\n- \"Meeting notes and action items from April 15, 2024.\"\n- \"Technical specification document for project Alpha.\"\n\n**Important notes**\n- This field contains descriptive metadata only, not the attachment content.\n- Sensitive information should be excluded or protected according to security policies.\n- Changes to this field do not impact the attachment file or its storage.\n- Consistent and accurate descriptions improve attachment management and user experience.\n- Plays a key role in enhancing searchability and filtering within Salesforce.\n\n**Dependency chain**\n- Directly associated with the attachment entity in Salesforce.\n- Often used in conjunction with other metadata fields such as attachment name, type, owner, size, and creation date.\n- Supports comprehensive context when combined with related records and metadata.\n\n**Technical details**\n- Data type: string.\n- Maximum length: typically 255 characters, configurable"}},"description":"The attachment property represents a file or document linked to a Salesforce record, encompassing both the binary content of the file and its associated metadata. It serves as a versatile container for a wide variety of file types—including documents, images, PDFs, spreadsheets, and emails—allowing users to enrich Salesforce records such as emails, cases, opportunities, or any standard or custom objects with supplementary information or evidence. This property supports comprehensive lifecycle management, enabling operations such as uploading new files, retrieving existing attachments, updating file details, and deleting attachments as needed. By maintaining a direct association with the parent Salesforce record through identifiers like ParentId, it ensures accurate linkage and seamless integration within the Salesforce ecosystem. 
The attachment property facilitates enhanced collaboration, documentation, and data completeness by providing a centralized and manageable way to handle files related to Salesforce data.\n\n**Field behavior**\n- Stores the binary content or a reference to a file linked to a specific Salesforce record.\n- Supports multiple file formats including but not limited to documents, images, PDFs, spreadsheets, and email files.\n- Enables attaching additional context, documentation, or evidence to Salesforce objects.\n- Allows full lifecycle management: create, read, update, and delete operations on attachments.\n- Contains metadata such as file name, MIME content type, file size, and the ID of the parent Salesforce record.\n- Reflects changes immediately in the associated Salesforce record’s file attachments.\n- Maintains a direct association with the parent record through the ParentId field to ensure accurate linkage.\n- Supports integration with Salesforce Files (ContentVersion and ContentDocument) for advanced file management features.\n\n**Implementation guidance**\n- Encode file content using Base64 encoding when transmitting via Salesforce APIs to ensure data integrity.\n- Validate file size against Salesforce limits (commonly up to 25 MB per attachment) and confirm supported file types before upload.\n- Use the ParentId field to correctly associate the attachment with the intended Salesforce record.\n- Implement robust permission checks to ensure only authorized users can upload, view, or modify attachments.\n- Consider leveraging Salesforce Files (ContentVersion and ContentDocument objects) for enhanced file management capabilities, versioning, and sharing features, especially for new implementations.\n- Handle error responses gracefully, including those related to file size limits, unsupported formats, or permission issues.\n- Ensure metadata fields such as file name and content type are accurately populated to facilitate proper handling and display.\n- When updating attachments, confirm that the new content or metadata aligns with organizational policies and compliance requirements.\n- Optimize performance"},"contentVersion":{"type":"object","properties":{"contentDocumentId":{"type":"string","description":"contentDocumentId is the unique identifier for the parent Content Document associated with a specific Content Version in Salesforce. It serves as the fundamental reference that links all versions of the same document, enabling comprehensive version control and efficient document management within the Salesforce ecosystem. This ID is immutable once the Content Version is created, ensuring a stable and consistent connection across all iterations of the document. By maintaining this linkage, Salesforce facilitates seamless navigation between individual document versions and their overarching document entity, supporting streamlined retrieval, updates, collaboration, and organizational workflows. 
This field is automatically assigned by Salesforce during the creation of a Content Version and is critical for maintaining the integrity and traceability of document versions throughout their lifecycle.\n\n**Field behavior**\n- Represents the unique ID of the parent Content Document to which the Content Version belongs.\n- Acts as the primary link connecting multiple versions of the same document.\n- Immutable after the Content Version record is created, preserving data integrity.\n- Enables navigation from a specific Content Version to its parent Content Document.\n- Automatically assigned and managed by Salesforce during Content Version creation.\n- Essential for maintaining consistent version histories and document relationships.\n\n**Implementation guidance**\n- Use this ID to reference or retrieve the entire document across all its versions.\n- Verify that the ID corresponds to an existing Content Document before usage.\n- Utilize in SOQL queries and API calls to associate Content Versions with their parent document.\n- Avoid modifying this field directly; it is managed internally by Salesforce.\n- Employ this ID to support document lifecycle management, version tracking, and collaboration features.\n- Leverage this field to ensure accurate linkage in integrations and custom development.\n\n**Examples**\n- \"0691t00000XXXXXXAAA\"\n- \"0692a00000YYYYYYBBB\"\n- \"0693b00000ZZZZZZCCC\"\n\n**Important notes**\n- Distinct from the Content Version ID, which identifies a specific version rather than the entire document.\n- Mandatory for any valid Content Version record to ensure proper document association.\n- Cannot be null or empty for Content Version records.\n- Critical for preserving document integrity, version tracking, and enabling collaborative document management.\n- Changes to this ID are not permitted post-creation to avoid breaking version linkages.\n- Plays a key role in Salesforce’s document sharing, access control, and audit capabilities.\n\n**Dependency chain**\n- Requires the existence of a valid Content Document record in Salesforce.\n- Directly related to Content Version records representing different iterations of the same document.\n- Integral"},"title":{"type":"string","description":"The title of the content version serves as the primary name or label assigned to this specific iteration of content within Salesforce. It acts as a key identifier that enables users to quickly recognize, differentiate, and manage various versions of content. The title typically summarizes the subject matter, purpose, or significant details about the content, making it easier to locate and understand at a glance. It should be concise yet sufficiently descriptive to effectively convey the essence and context of the content version. 
Well-crafted titles improve navigation, searchability, and version tracking within content management workflows, enhancing overall user experience and operational efficiency.\n\n**Field behavior**\n- Functions as the main identifier or label for the content version in user interfaces, listings, and reports.\n- Enables users to distinguish between different versions of the same content efficiently.\n- Reflects the content’s subject, purpose, key attributes, or version status.\n- Should be concise but informative to facilitate quick recognition and comprehension.\n- Commonly displayed in search results, filters, version histories, and content libraries.\n- Supports sorting and filtering operations to streamline content management.\n\n**Implementation guidance**\n- Use clear, descriptive titles that accurately represent the content version’s focus or changes.\n- Avoid special characters or formatting that could cause display, parsing, or processing issues.\n- Keep titles within a reasonable length (typically up to 255 characters) to ensure readability across various UI components and devices.\n- Update the title appropriately when creating new versions to reflect significant updates or revisions.\n- Consider including version numbers, dates, or status indicators to enhance clarity and version tracking.\n- Follow organizational naming conventions and standards to maintain consistency.\n\n**Examples**\n- \"Q2 Financial Report 2024\"\n- \"Product Launch Presentation v3\"\n- \"Employee Handbook - Updated March 2024\"\n- \"Marketing Strategy Overview\"\n- \"Website Redesign Proposal - Final Draft\"\n\n**Important notes**\n- Titles do not need to be globally unique but should be meaningful and contextually relevant.\n- Changing the title does not modify the underlying content data or its integrity.\n- Titles are frequently used in search, filtering, sorting, and reporting operations within Salesforce.\n- Consistent naming conventions and formatting improve content management efficiency and user experience.\n- Consider organizational standards or guidelines when defining title formats.\n- Titles should avoid ambiguity to prevent confusion between similar content versions.\n\n**Dependency chain**\n- Part of the ContentVersion object within Salesforce’s content management framework.\n- Often associated with other metadata fields such as description"},"pathOnClient":{"type":"string","description":"The pathOnClient property specifies the original file path of the content as it exists on the client machine before being uploaded to Salesforce. This path serves as a precise reference to the source location of the file on the user's local system, facilitating identification, auditing, and tracking of the file's origin. It captures either the full absolute path or a relative path depending on the client environment and upload context, accurately reflecting the exact location from which the file was sourced. 
This information is primarily used for informational, diagnostic, and user interface purposes within Salesforce and does not influence how the file is stored, accessed, or secured in the system.\n\n**Field behavior**\n- Represents the full absolute or relative file path on the client device where the file was stored prior to upload.\n- Automatically populated during file upload processes when the client environment provides this information.\n- Used mainly for reference, auditing, and user interface display to provide context about the file’s origin.\n- May be empty, null, or omitted if the file was not uploaded from a client device or if the path information is unavailable or restricted.\n- Does not influence file storage, retrieval, or access permissions within Salesforce.\n- Retains the original formatting and syntax as provided by the client system at upload time.\n\n**Implementation guidance**\n- Capture the file path exactly as provided by the client system at the time of upload, preserving the original format and case sensitivity.\n- Handle differences in file path syntax across operating systems (e.g., backslashes for Windows, forward slashes for Unix/Linux/macOS).\n- Treat this property strictly as metadata for informational purposes; avoid using it for security, access control, or file retrieval logic.\n- Sanitize or mask sensitive directory or user information when displaying or logging this path to protect user privacy and comply with data protection regulations.\n- Ensure compliance with relevant privacy laws and organizational policies when storing or exposing client file paths.\n- Consider truncating, normalizing, or encoding paths if necessary to conform to platform length limits, display constraints, or to prevent injection vulnerabilities.\n- Provide clear documentation to users and administrators about the non-security nature of this property to prevent misuse.\n\n**Examples**\n- \"C:\\\\Users\\\\JohnDoe\\\\Documents\\\\Report.pdf\"\n- \"/home/janedoe/projects/presentation.pptx\"\n- \"Documents/Invoices/Invoice123.pdf\"\n- \"Desktop/Project Files/DesignMockup.png\"\n- \"../relative/path/to/file.txt\"\n\n**Important notes**\n- This property does not"},"tagCsv":{"type":"string","description":"A comma-separated string representing the tags associated with the content version in Salesforce. This field enables the assignment of multiple descriptive tags within a single string, facilitating efficient categorization, organization, and enhanced searchability of the content. Tags serve as metadata labels that help users filter, locate, and manage content versions effectively across the platform by grouping related items and supporting advanced search and filtering capabilities. 
Proper use of this field improves content discoverability, streamlines content management workflows, and supports dynamic content classification.\n\n**Field behavior**\n- Accepts multiple tags separated by commas, typically without spaces, though trimming whitespace is recommended.\n- Tags are used to categorize, organize, and improve the discoverability of content versions.\n- Tags can be added, updated, or removed by modifying the entire string value.\n- Duplicate tags within the string should be avoided to maintain clarity and effectiveness.\n- Tags are generally treated as case-insensitive for search purposes but stored exactly as entered.\n- Changes to this field immediately affect how content versions are indexed and retrieved in search operations.\n\n**Implementation guidance**\n- Ensure individual tags do not contain commas to prevent parsing errors.\n- Validate the tagCsv string for invalid characters, excessive length, and proper formatting.\n- When updating tags, replace the entire string to accurately reflect the current tag set.\n- Trim leading and trailing whitespace from each tag before processing or storage.\n- Utilize this field to enhance search filters, categorization, and content management workflows.\n- Implement application-level checks to prevent duplicate or meaningless tags.\n- Consider standardizing tag formats (e.g., lowercase) to maintain consistency across records.\n\n**Examples**\n- \"marketing,Q2,presentation\"\n- \"urgent,clientA,confidential\"\n- \"releaseNotes,v1.2,approved\"\n- \"finance,yearEnd,reviewed\"\n\n**Important notes**\n- The maximum length of the tagCsv string is subject to Salesforce field size limitations.\n- Tags should be meaningful, consistent, and standardized to maximize their utility.\n- Modifications to tags can impact content visibility and access if used in filtering or permission logic.\n- This field does not enforce uniqueness of tags; application-level logic should handle duplicate prevention.\n- Tags are stored as-is; any normalization or case conversion must be handled externally.\n- Improper formatting or invalid characters may cause errors or unexpected behavior in search and filtering.\n\n**Dependency chain**\n- Closely related to contentVersion metadata and search/filtering functionality.\n- May integrate with Salesforce"},"contentLocation":{"type":"string","description":"contentLocation specifies the precise storage location of the content file associated with the Salesforce ContentVersion record. It identifies where the actual file data resides—whether within Salesforce's internal storage infrastructure, an external content management system, or a linked external repository—and governs how the content is accessed, retrieved, and managed throughout its lifecycle. This field plays a crucial role in content delivery, access permissions, and integration workflows by clearly indicating the source location of the file data. 
Properly setting and interpreting contentLocation ensures that content is handled correctly according to its storage context, enabling seamless access when stored internally or facilitating robust integration and synchronization with third-party external systems.\n\n**Field behavior**\n- Determines the origin and storage context of the content file linked to the ContentVersion record.\n- Influences the mechanisms and protocols used for accessing, retrieving, and managing the content.\n- Typically assigned automatically by Salesforce based on the upload method or integration configuration.\n- Common values include 'S' for Salesforce internal storage, 'E' for external storage systems, and 'L' for linked external locations.\n- Often read-only to preserve data integrity and prevent unauthorized or unintended modifications.\n- Alterations to this field can impact content visibility, sharing settings, and delivery workflows.\n\n**Implementation guidance**\n- Utilize this field to programmatically distinguish between internally stored content and externally managed files.\n- When integrating with external repositories, ensure contentLocation accurately reflects the external storage to enable correct content handling and retrieval.\n- Avoid manual updates unless supported by Salesforce documentation and accompanied by a comprehensive understanding of the consequences.\n- Validate the contentLocation value prior to processing or delivering content to guarantee proper routing and access control.\n- Confirm that external storage systems are correctly configured, authenticated, and authorized to maintain uninterrupted access.\n- Monitor this field closely during content migration or integration activities to prevent data inconsistencies or access disruptions.\n\n**Examples**\n- 'S' — Content physically stored within Salesforce’s internal storage infrastructure.\n- 'E' — Content residing in an external system integrated with Salesforce, such as a third-party content repository.\n- 'L' — Content located in a linked external location, representing specialized or less common storage configurations.\n\n**Important notes**\n- Manual modifications to contentLocation can lead to broken links, access failures, or data inconsistencies.\n- Salesforce generally manages this field automatically to uphold consistency and reliability.\n- The contentLocation value directly influences content delivery methods and access control policies.\n- External content locations require proper configuration, authentication, and permissions to function correctly"}},"description":"The contentVersion property serves as the unique identifier for a specific iteration of a content item within the Salesforce platform's content management system. It enables precise tracking, retrieval, and management of individual versions or revisions of documents, files, or digital assets. Each contentVersion corresponds to a distinct, immutable state of the content at a given point in time, facilitating robust version control, content integrity, and auditability. When a content item is created or modified, Salesforce automatically generates a new contentVersion to capture those changes, allowing users and systems to access, compare, or revert to previous versions as necessary. 
This property is fundamental for workflows involving content updates, approvals, compliance tracking, and historical content analysis.\n\n**Field behavior**\n- Uniquely identifies a single, immutable version of a content item within Salesforce.\n- Differentiates between multiple revisions or updates of the same content asset.\n- Automatically created or incremented by Salesforce upon content creation or modification.\n- Used in API operations to retrieve, reference, update metadata, or manage specific content versions.\n- Remains constant once assigned; any content changes result in a new contentVersion record.\n- Supports audit trails by preserving historical versions of content.\n\n**Implementation guidance**\n- Ensure synchronization of contentVersion values with Salesforce to maintain consistency across integrated systems.\n- Utilize this property for version-specific operations such as downloading content, updating metadata, or deleting particular versions.\n- Validate contentVersion identifiers against Salesforce API standards to avoid errors or mismatches.\n- Implement comprehensive error handling for scenarios where contentVersion records are missing, deprecated, or inaccessible due to permission restrictions.\n- Consider access control differences at the version level, as visibility and permissions may vary between versions.\n- Leverage contentVersion in workflows requiring version comparison, rollback, or approval processes.\n\n**Examples**\n- \"0681t000000XyzA\" (initial version of a document)\n- \"0681t000000XyzB\" (a subsequent, updated version reflecting content changes)\n- \"0681t000000XyzC\" (a further revised version incorporating additional edits)\n\n**Important notes**\n- contentVersion is distinct from contentDocumentId; contentVersion identifies a specific version, whereas contentDocumentId refers to the overall document entity.\n- Proper versioning is critical for maintaining content integrity, supporting audit trails, and ensuring regulatory compliance.\n- Access permissions and visibility can differ between content versions, potentially affecting user access and operations.\n- The contentVersion ID typically begins with the prefix"}}},"File":{"type":"object","description":"**CRITICAL: This object is REQUIRED for all file-based import adaptor types.**\n\n**When to include this object**\n\n✅ **MUST SET** when `adaptorType` is one of:\n- `S3Import`\n- `FTPImport`\n- `AS2Import`\n\n❌ **DO NOT SET** for non-file-based imports like:\n- `SalesforceImport`\n- `NetSuiteImport`\n- `HTTPImport`\n- `MongodbImport`\n- `RDBMSImport`\n\n**Minimum required fields**\n\nFor most file imports, you need at minimum:\n- `fileName`: The output file name (supports Handlebars like `{{timestamp}}`)\n- `type`: The file format (json, csv, xml, xlsx)\n- `skipAggregation`: Usually `false` for standard imports\n\n**Example (S3 Import)**\n\n```json\n{\n  \"file\": {\n    \"fileName\": \"customers-{{timestamp}}.json\",\n    \"skipAggregation\": false,\n    \"type\": \"json\"\n  },\n  \"s3\": {\n    \"region\": \"us-east-1\",\n    \"bucket\": \"my-bucket\",\n    \"fileKey\": \"customers-{{timestamp}}.json\"\n  },\n  \"adaptorType\": \"S3Import\"\n}\n```","properties":{"fileName":{"type":"string","description":"**REQUIRED for file-based imports.**\n\nThe name of the file to be created/written. 
Supports Handlebars expressions for dynamic naming.\n\n**Default behavior**\n- When the user does not specify a file naming convention, **always default to timestamped filenames** using `{{timestamp}}` (e.g., `\"items-{{timestamp}}.csv\"`). This ensures each run produces a unique file and avoids overwriting previous exports.\n- Only use a fixed filename (without timestamp) if the user explicitly requests overwriting or a fixed name.\n\n**Common patterns**\n- `\"data-{{timestamp}}.json\"` - Timestamped JSON file (DEFAULT — use this pattern when not specified)\n- `\"export-{{date}}.csv\"` - Date-stamped CSV file\n- `\"{{recordType}}-backup.xml\"` - Dynamic record type naming\n\n**Examples**\n- `\"customers-{{timestamp}}.json\"`\n- `\"orders-export.csv\"`\n- `\"inventory-{{date}}.xlsx\"`\n\n**Important**\n- For S3 imports, this should typically match the `s3.fileKey` value\n- For FTP imports, this should typically match the `ftp.fileName` value"},"skipAggregation":{"type":"boolean","description":"Controls whether records are aggregated into a single file or processed individually.\n\n**Values**\n- `false` (DEFAULT): Records are aggregated into a single output file\n- `true`: Each record is written as a separate file\n\n**When to use**\n- **`false`**: Standard batch imports where all records go into one file\n- **`true`**: When each record needs its own file (e.g., individual documents)\n\n**Default:** `false`","default":false},"type":{"type":"string","enum":["json","csv","xml","xlsx","filedefinition"],"description":"**REQUIRED for file-based imports.**\n\nThe format of the output file.\n\n**Options**\n- `\"json\"`: JSON format (most common for API data)\n- `\"csv\"`: Comma-separated values (tabular data)\n- `\"xml\"`: XML format\n- `\"xlsx\"`: Excel spreadsheet format\n- `\"filedefinition\"`: Custom file definition format\n\n**Selection guidance**\n- Use `\"json\"` for structured/nested data, API payloads\n- Use `\"csv\"` for flat tabular data, spreadsheet-like records\n- Use `\"xml\"` for XML-based integrations, SOAP services\n- Use `\"xlsx\"` for Excel-compatible outputs"},"encoding":{"type":"string","enum":["utf8","win1252","utf-16le","gb18030","macroman","iso88591","shiftjis"],"description":"Character encoding for the output file.\n\n**Default:** `\"utf8\"`\n\n**When to change**\n- `\"win1252\"`: Legacy Windows systems\n- `\"utf-16le\"`: Unicode with BOM requirements\n- `\"gb18030\"`: Chinese character sets\n- `\"shiftjis\"`: Japanese character sets","default":"utf8"},"delete":{"type":"boolean","description":"Whether to delete the source file after successful import.\n\n**Values**\n- `true`: Delete source file after processing\n- `false`: Keep source file\n\n**Default:** `false`","default":false},"compressionFormat":{"type":"string","enum":["gzip","zip"],"description":"Compression format for the output file.\n\n**Options**\n- `\"gzip\"`: GZIP compression (.gz)\n- `\"zip\"`: ZIP compression (.zip)\n\n**When to use**\n- Large files that benefit from compression\n- When target system expects compressed files"},"backupPath":{"type":"string","description":"Path where backup copies of files should be stored.\n\n**Examples**\n- `\"backup/\"` - Relative backup folder\n- `\"/archive/2024/\"` - Absolute backup path"},"purgeInternalBackup":{"type":"boolean","description":"Whether to purge internal backup copies after successful processing.\n\n**Default:** `false`","default":false},"batchSize":{"type":"integer","description":"Number of records to include per batch/file when processing large 
datasets.\n\n**When to use**\n- Large imports that need to be split into multiple files\n- When target system has file size limitations\n\n**Note**\n- Only applicable when `skipAggregation` is `true`"},"encrypt":{"type":"boolean","description":"Whether to encrypt the output file.\n\n**Values**\n- `true`: Encrypt the file (requires PGP configuration)\n- `false`: No encryption\n\n**Default:** `false`","default":false},"csv":{"type":"object","description":"CSV-specific configuration. Only used when `type` is `\"csv\"`.","properties":{"rowDelimiter":{"type":"string","description":"Character(s) used to separate rows. Default is newline.","default":"\n"},"columnDelimiter":{"type":"string","description":"Character(s) used to separate columns. Default is comma.","default":","},"includeHeader":{"type":"boolean","description":"Whether to include header row with column names.","default":true},"wrapWithQuotes":{"type":"boolean","description":"Whether to wrap field values in quotes.","default":false},"replaceTabWithSpace":{"type":"boolean","description":"Replace tab characters with spaces.","default":false},"replaceNewlineWithSpace":{"type":"boolean","description":"Replace newline characters with spaces within fields.","default":false},"truncateLastRowDelimiter":{"type":"boolean","description":"Remove trailing row delimiter from the file.","default":false}}},"json":{"type":"object","description":"JSON-specific configuration. Only used when `type` is `\"json\"`.","properties":{"resourcePath":{"type":"string","description":"JSONPath expression to locate records within the JSON structure."}}},"xml":{"type":"object","description":"XML-specific configuration. Only used when `type` is `\"xml\"`.","properties":{"resourcePath":{"type":"string","description":"XPath expression to locate records within the XML structure."}}},"fileDefinition":{"type":"object","description":"Configuration settings for parsing files using a predefined file definition. This object enables processing of complex, non-standard, or proprietary file formats that require specialized parsing logic beyond what the standard parsers (CSV, JSON, XML, etc.) can handle.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"filedefinition\". This approach is required for properly handling:\n- Legacy or proprietary file formats with complex structures\n- Fixed-width text files where field positions are defined by character positions\n- Electronic Data Interchange (EDI) documents (X12, EDIFACT, etc.)\n- Multi-record type files where different lines have different formats\n- Files requiring complex preprocessing or custom parsing logic\n\n**File definition characteristics**\n\n- **Custom Parsing Rules**: Applies predefined parsing logic to complex file formats\n- **Reusable Configurations**: References externally defined parsing rules that can be reused\n- **Complex Format Support**: Handles formats that standard parsers cannot process\n- **Specialized Processing**: Often used for industry-specific or legacy formats\n\n**Implementation strategy for ai agents**\n\n1. **Format Analysis**:\n    - Determine if the file format is standard (CSV, JSON, XML) or requires custom parsing\n    - Check if the format follows industry standards like EDI, SWIFT, or fixed-width\n    - Assess if there are multiple record types within the same file\n    - Identify if specialized logic is needed to interpret the file structure\n\n2. 
**File Definition Selection**:\n    - Verify that a suitable file definition has already been created in the system\n    - Check if existing file definitions match the format requirements\n    - Confirm the file definition ID from system administrators if needed\n    - Ensure the file definition is compatible with the import's needs\n","properties":{"_fileDefinitionId":{"type":"string","format":"objectId","description":"Reference to the file definition resource."}}}}},"FileSystem":{"type":"object","description":"Configuration for FileSystem imports","properties":{"directoryPath":{"type":"string","description":"Directory path to read files from (required)"}},"required":["directoryPath"]},"AiAgent":{"type":"object","description":"AI Agent configuration used by both AiAgentImport and GuardrailImport (ai_agent type).\n\nConfigures which AI provider and model to use, along with instructions, parameter\ntuning, output format, and available tools. Two providers are supported:\n\n- **openai**: OpenAI models (GPT-4, GPT-4o, GPT-4o-mini, GPT-5, etc.)\n- **gemini**: Google Gemini models via LiteLLM proxy\n\nA `_connectionId` is optional (BYOK). If not provided on the parent import,\nplatform-managed credentials are used.\n","properties":{"provider":{"type":"string","enum":["openai","gemini"],"default":"openai","description":"AI provider to use.\n\n- **openai**: Uses OpenAI Responses API. Configure via the `openai` object.\n- **gemini**: Uses Google Gemini via LiteLLM. Configure via `litellm` with\n  Gemini-specific overrides in `litellm._overrides.gemini`.\n"},"openai":{"type":"object","description":"OpenAI-specific configuration. Used when `provider` is \"openai\".\n","properties":{"instructions":{"type":"string","maxLength":51200,"description":"System prompt that defines the AI agent's behavior, goals, and constraints.\nMaximum 50 KB.\n"},"model":{"type":"string","description":"OpenAI model identifier.\n"},"reasoning":{"type":"object","description":"Controls depth of reasoning for complex tasks.\n","properties":{"effort":{"type":"string","enum":["minimal","low","medium","high"],"description":"How much reasoning effort the model should invest"},"summary":{"type":"string","enum":["concise","auto","detailed"],"description":"Level of detail in reasoning summaries"}}},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"Sampling temperature. Higher values (e.g. 1.5) produce more creative output,\nlower values (e.g. 0.2) produce more focused and deterministic output.\n"},"topP":{"type":"number","minimum":0.1,"maximum":1,"description":"Nucleus sampling parameter"},"maxOutputTokens":{"type":"number","minimum":100,"maximum":128000,"default":1000,"description":"Maximum number of tokens in the model's response"},"serviceTier":{"type":"string","enum":["auto","default","priority"],"default":"default","description":"OpenAI service tier. 
\"priority\" provides higher rate limits and\nlower latency at increased cost.\n"},"output":{"type":"object","description":"Output format configuration","properties":{"format":{"type":"object","description":"Controls the structure of the model's output.\n","properties":{"type":{"type":"string","enum":["text","json_schema","blob"],"default":"text","description":"Output format type.\n\n- **text**: Free-form text response\n- **json_schema**: Structured JSON conforming to a schema\n- **blob**: Binary data output\n"},"name":{"type":"string","description":"Name for the output format (used with json_schema)"},"strict":{"type":"boolean","default":false,"description":"Whether to enforce strict schema validation on output"},"jsonSchema":{"type":"object","description":"JSON Schema for structured output. Required when `format.type` is \"json_schema\".\n","properties":{"type":{"type":"string","enum":["object","array","string","number","integer","boolean"]},"properties":{"type":"object","additionalProperties":true},"required":{"type":"array","items":{"type":"string"}},"additionalproperties":{"type":"boolean"}}}}},"verbose":{"type":"string","enum":["low","medium","high"],"default":"medium","description":"Level of detail in the model's response"}}},"tools":{"type":"array","description":"Tools available to the AI agent during processing.\n","items":{"type":"object","properties":{"type":{"type":"string","enum":["web_search","mcp","image_generation","tool"],"description":"Type of tool.\n\n- **web_search**: Search the web for information\n- **mcp**: Connect to an MCP server for additional tools\n- **image_generation**: Generate images\n- **tool**: Reference a Celigo Tool resource\n"},"webSearch":{"type":"object","description":"Web search configuration (empty object to enable)"},"imageGeneration":{"type":"object","description":"Image generation configuration","properties":{"background":{"type":"string","enum":["transparent","opaque"]},"quality":{"type":"string","enum":["low","medium","high"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024"]},"outputFormat":{"type":"string","enum":["png","webp","jpeg"]}}},"mcp":{"type":"object","description":"MCP server tool configuration","properties":{"_mcpConnectionId":{"type":"string","format":"objectId","description":"Connection to the MCP server"},"allowedTools":{"type":"array","description":"Specific tools to allow from the MCP server (all if omitted)","items":{"type":"string"}}}},"tool":{"type":"object","description":"Reference to a Celigo Tool resource.\n","properties":{"_toolId":{"type":"string","format":"objectId","description":"Reference to the Tool resource"},"overrides":{"type":"object","description":"Per-agent overrides for the tool's internal resources"}}}}}}}},"litellm":{"type":"object","description":"LiteLLM proxy configuration. 
Used when `provider` is \"gemini\".\n\nLiteLLM provides a unified interface to multiple AI providers.\nGemini-specific settings are in `_overrides.gemini`.\n","properties":{"model":{"type":"string","description":"LiteLLM model identifier"},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"Sampling temperature"},"maxCompletionTokens":{"type":"number","minimum":100,"maximum":128000,"default":1000,"description":"Maximum number of tokens in the response"},"topP":{"type":"number","minimum":0.1,"maximum":1,"description":"Nucleus sampling parameter"},"seed":{"type":"number","description":"Random seed for reproducible outputs"},"responseFormat":{"type":"object","description":"Output format configuration","properties":{"type":{"type":"string","enum":["text","json_schema","blob"],"default":"text"},"name":{"type":"string"},"strict":{"type":"boolean","default":false},"jsonSchema":{"type":"object","properties":{"type":{"type":"string","enum":["object","array","string","number","integer","boolean"]},"properties":{"type":"object","additionalProperties":true},"required":{"type":"array","items":{"type":"string"}},"additionalProperties":{"type":"boolean"}}}}},"_overrides":{"type":"object","description":"Provider-specific overrides","properties":{"gemini":{"type":"object","description":"Gemini-specific configuration overrides.\n","properties":{"systemInstruction":{"type":"string","maxLength":51200,"description":"System instruction for Gemini models. Equivalent to OpenAI's `instructions`.\nMaximum 50 KB.\n"},"tools":{"type":"array","description":"Gemini-specific tools","items":{"type":"object","properties":{"type":{"type":"string","enum":["googleSearch","urlContext","fileSearch","mcp","tool"],"description":"Type of Gemini tool.\n\n- **googleSearch**: Google Search grounding\n- **urlContext**: URL content retrieval\n- **fileSearch**: Search uploaded files\n- **mcp**: Connect to an MCP server\n- **tool**: Reference a Celigo Tool resource\n"},"googleSearch":{"type":"object","description":"Google Search configuration (empty object to enable)"},"urlContext":{"type":"object","description":"URL context configuration (empty object to enable)"},"fileSearch":{"type":"object","properties":{"fileSearchStoreNames":{"type":"array","items":{"type":"string"}}}},"mcp":{"type":"object","properties":{"_mcpConnectionId":{"type":"string","format":"objectId"},"allowedTools":{"type":"array","items":{"type":"string"}}}},"tool":{"type":"object","properties":{"_toolId":{"type":"string","format":"objectId"},"overrides":{"type":"object"}}}}}},"responseModalities":{"type":"array","description":"Response output modalities","items":{"type":"string","enum":["text","image"]},"default":["text"]},"topK":{"type":"number","description":"Top-K sampling parameter for Gemini"},"thinkingConfig":{"type":"object","description":"Controls Gemini's extended thinking capabilities","properties":{"includeThoughts":{"type":"boolean","description":"Whether to include thinking steps in the response"},"thinkingBudget":{"type":"number","minimum":100,"maximum":4000,"description":"Maximum tokens allocated for thinking"},"thinkingLevel":{"type":"string","enum":["minimal","low","medium","high"]}}},"imageConfig":{"type":"object","description":"Gemini image generation configuration","properties":{"aspectRatio":{"type":"string","enum":["1:1","2:3","3:2","3:4","4:3","4:5","5:4","9:16","16:9","21:9"]},"imageSize":{"type":"string","enum":["1K","2K","4k"]}}},"mediaResolution":{"type":"string","enum":["low","medium","high"],"description":"Resolution for 
media inputs (images, video)"}}}}}}}}},"Guardrail":{"type":"object","description":"Configuration for GuardrailImport adaptor type.\n\nGuardrails are safety and compliance checks that can be applied to data\nflowing through integrations. Three types of guardrails are supported:\n\n- **ai_agent**: Uses an AI model to evaluate data against custom instructions\n- **pii**: Detects and optionally masks personally identifiable information\n- **moderation**: Checks content against moderation categories (e.g. `hate`, `violence`, `harassment`, `sexual`, `self_harm`, `illicit`). Use category names exactly as listed in `moderation.categories` — e.g. `hate` (not \"hate speech\"), `violence` (not \"violent content\").\n\nGuardrail imports do not require a `_connectionId` (unless using BYOK for `ai_agent` type).\n","properties":{"type":{"type":"string","enum":["ai_agent","pii","moderation"],"description":"The type of guardrail to apply.\n\n- **ai_agent**: Evaluate data using an AI model with custom instructions.\n  Requires the `aiAgent` sub-configuration.\n- **pii**: Detect personally identifiable information in data.\n  Requires the `pii` sub-configuration with at least one entity type.\n- **moderation**: Check content for harmful categories.\n  Requires the `moderation` sub-configuration with at least one category.\n"},"confidenceThreshold":{"type":"number","minimum":0,"maximum":1,"default":0.7,"description":"Confidence threshold for guardrail detection (0 to 1).\n\nOnly detections with confidence at or above this threshold will be flagged.\nLower values catch more potential issues but may increase false positives.\n"},"aiAgent":{"$ref":"#/components/schemas/AiAgent"},"pii":{"type":"object","description":"Configuration for PII (Personally Identifiable Information) detection.\n\nRequired when `guardrail.type` is \"pii\". At least one entity type must be specified.\n","properties":{"entities":{"type":"array","description":"PII entity types to detect in the data.\n\nAt least one entity must be specified when using PII guardrails.\n","items":{"type":"string","enum":["credit_card_number","card_security_code_cvv_cvc","cryptocurrency_wallet_address","date_and_time","email_address","iban_code","bic_swift_bank_identifier_code","ip_address","location","medical_license_number","national_registration_number","persons_name","phone_number","url","us_bank_account_number","us_drivers_license","us_itin","us_passport_number","us_social_security_number","uk_nhs_number","uk_national_insurance_number","spanish_nif","spanish_nie","italian_fiscal_code","italian_drivers_license","italian_vat_code","italian_passport","italian_identity_card","polish_pesel","finnish_personal_identity_code","singapore_nric_fin","singapore_uen","australian_abn","australian_acn","australian_tfn","australian_medicare","indian_pan","indian_aadhaar","indian_vehicle_registration","indian_voter_id","indian_passport","korean_resident_registration_number"]}},"mask":{"type":"boolean","default":false,"description":"Whether to mask detected PII values in the output.\n\nWhen true, detected PII is replaced with masked values.\nWhen false, PII is only flagged without modification.\n"}}},"moderation":{"type":"object","description":"Configuration for content moderation.\n\nRequired when `guardrail.type` is \"moderation\". 
At least one category must be specified.\n","properties":{"categories":{"type":"array","description":"Content moderation categories to check for.\n\nAt least one category must be specified when using moderation guardrails.\n","items":{"type":"string","enum":["sexual","sexual_minors","hate","hate_threatening","harassment","harassment_threatening","self_harm","self_harm_intent","self_harm_instructions","violence","violence_graphic","illicit","illicit_violent"]}}}}},"required":["type"]},"OneToMany":{"type":"boolean","description":"Controls whether the resource treats child records within parent records as the primary data units.\n\n**Important: this is not for specifying where records are in an api response**\n\nIf you need to tell an export where to find the array of records in the HTTP response\nbody (e.g. \"the records are at data.items\"), use `http.response.resourcePath` instead.\n`oneToMany` serves a completely different purpose — it operates on records that have\nalready been extracted from the response.\n\n**What oneToMany actually does**\n\nWhen set to true, this field fundamentally changes how record data is processed:\n- The system will \"unwrap\" nested child records from their parent containers\n- Each child record becomes a separate output record for downstream processing\n- The pathToMany field must be set to indicate where these child records are located\n- Parent record fields can still be accessed via a special \"parent\" context\n\nThis is typically used on **lookup exports** (isLookup: true) or **imports** where\nthe incoming records contain nested arrays that need to be fanned out.\n\nCommon scenarios for enabling this option:\n- Processing order line items individually from an order export\n- Handling invoice line items from an invoice export\n- Processing individual transaction lines from journal entries\n- Extracting address records from customer exports\n\nThis setting applies for the duration of the current flow step only and does not affect\nhow data is stored or structured in other flow steps.\n\nIf false (default), the resource processes each top-level record as a single unit.\n","default":false},"PathToMany":{"type":"string","description":"Specifies the JSON path to child records when oneToMany mode is enabled.\n\nThis field is only used when oneToMany is set to true. It defines the exact location\nof child records within the parent record structure using dot notation:\n\n- Simple path: \"items\" for a direct child array field\n- Nested path: \"lines.lineItems\" for a more deeply nested array\n- Multi-level: \"details.items.subitems\" for deeply nested structures\n\nThe system uses this path to:\n- Locate the array of child records within each parent record\n- Extract each array element as a separate record for processing\n- Make both the child record data and parent context available to downstream steps\n\nImportant considerations:\n- The path must point to an array field\n- For row-based data (i.e. where Celigo models this via an array or arrays of objects), this field is not required\n- If the path is invalid or doesn't exist, the resource will report success but process zero records\n- Maximum path depth: 10 levels\n\nThis field must contain a valid JSON path expression using dot notation.\n"},"Filter":{"type":"object","description":"Configuration for selectively processing records based on specified criteria. 
This object enables\nprecise control over which items are included or excluded from processing operations.\n\n**Filter behavior**\n\nWhen configured, the filter is applied before processing begins:\n- Items that match the filter criteria are processed\n- Items that don't match are completely skipped\n- No partial processing is performed\n\n**Implementation approaches**\n\nThere are two distinct filtering mechanisms available:\n\n**Rule-Based Filtering (`type: \"expression\"`)**\n- **Best For**: Common filtering patterns based on standard attributes\n- **Capabilities**: Filter by names, values, dates, numerical ranges, text patterns\n- **Advantages**: Declarative, no coding required, consistent performance\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear, static criteria for selection\n\n**Script-Based Filtering (`type: \"script\"`)**\n- **Best For**: Complex logic, dynamic criteria, or business rules\n- **Capabilities**: Full programmatic control, access to complete metadata\n- **Advantages**: Maximum flexibility, can implement any filtering logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Simple rules aren't sufficient or logic needs to be dynamic\n","properties":{"type":{"type":"string","description":"Determines which filtering mechanism to use. This choice affects which properties\nmust be configured and how filtering logic is implemented.\n\n**Available types**\n\n**Rule-Based Filtering (`\"expression\"`)**\n- **Required Config**: The `expression` object with rule definitions\n- **Behavior**: Evaluates declarative rules against item attributes\n- **Best For**: Common patterns like name matching, date ranges, value limits\n- **Advantages**: Simpler to configure, no custom code required\n\n**Script-Based Filtering (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to determine which items to process\n- **Best For**: Complex conditions, business logic, dynamic criteria\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard filtering needs (name, size, date), use `\"expression\"`\n2. For complex logic or conditions not covered by expressions, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based filtering. This object enables filtering\nitems based on common attributes without requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define filtering rules that\ncan match against item attributes like name, type, value, date, and other properties.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Rules can be combined with AND/OR logic\n- Each rule can check a specific attribute\n- Multiple conditions can be applied (ranges, pattern matching, exact matches)\n\n**Common filter patterns**\n\n1. **Pattern matching**: Using wildcards like `*` and `?`\n2. **Value range filtering**: Numbers between min and max values\n3. **Date range filtering**: Items created/modified within specific time ranges\n4. 
**Status checking**: Items with specific status values or properties\n\nFor AI agents: Rule-based filtering should be your first choice when the filtering criteria\ncan be expressed in terms of standard attributes. Only use script-based filtering when\nmore complex logic is required.\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"1\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"1\"\nfor current implementations.\n","enum":["1"]},"rules":{"type":"array","description":"Expression array defining filter conditions using prefix notation. The first element is the operator,\nfollowed by its operands which may themselves be nested expression arrays.\n\nThe rule expression follows this pattern:\n- First element: Operator name (string)\n- Remaining elements: Operands for that operator (values or nested expressions)\n\n**Expression structure**\n\nFilter expressions use a prefix notation where operators appear before their operands:\n```\n[operator, operand1, operand2, ...]\n```\n\n**Comparison Operators**\n- `\"equals\"`: Exact match (equals)\n- `\"notequals\"`: Not equal to value (not equals)\n- `\"greaterthan\"`: Value is greater than specified value (is greater than)\n- `\"greaterthanequals\"`: Value is greater than or equal to specified value (is greater than or equals)\n- `\"lessthan\"`: Value is less than specified value (is less than)\n- `\"lessthanequals\"`: Value is less than or equal to specified value (is less than or equals)\n- `\"startswith\"`: String starts with specified prefix (starts with)\n- `\"endswith\"`: String ends with specified suffix (ends with)\n- `\"contains\"`: String contains specified substring (contains)\n- `\"doesnotcontain\"`: String does not contain specified substring (does not contain)\n- `\"isempty\"`: Field is empty or null (is empty)\n- `\"isnotempty\"`: Field contains a value (is not empty)\n- `\"matches\"`: Matches specified pattern (matches)\n\n**Logical Operators**\n- `\"and\"`: All conditions must be true\n- `\"or\"`: At least one condition must be true\n- `\"not\"`: Negates the condition\n\n**Field Access and Type Conversion**\n- `\"extract\"`: Access a field from the item by name\n- `\"settings\"`: Access a custom setting from the flow, flow step, or integration configuration\n- `\"boolean\"`: Convert value to Boolean type\n- `\"epochtime\"`: Convert value to Epoch Time (Unix timestamp)\n- `\"number\"`: Convert value to Number type\n- `\"string\"`: Convert value to String type\n\n**Field Access Details**\n\n**Using `extract` to access record fields:**\n- Retrieves values from the current record being processed\n- Can access nested properties using dot notation (e.g., `\"customer.email\"`)\n- Returns the raw field value which may need type conversion\n\n**Using `settings` to access configuration values:**\n- Retrieves values from the integration's configuration settings\n- Supports different scopes with prefix notation:\n  - `flow.settingName`: Access flow-level settings\n  - `export.settingName`: Access export-level settings\n  - `import.settingName`: Access import-level settings\n  - `integration.settingName`: Access integration-level settings\n- Useful for dynamic filtering based on configuration\n\n**Field Transformations**\n- `\"lowercase\"`: Convert string to lowercase\n- `\"uppercase\"`: Convert string to uppercase\n- `\"ceiling\"`: Round number up to the nearest integer\n- `\"floor\"`: Round number down to 
the nearest integer\n- `\"abs\"`: Get absolute value of a number\n\nType conversion operators are often necessary when comparing extracted field values against literals or when the field type doesn't match the comparison operator's expected type. For example:\n\n```json\n[\n  \"equals\",\n  [\n    \"number\",  // Convert to number before comparison\n    [\n      \"extract\",\n      \"quantity\"\n    ]\n  ],\n  100\n]\n```\n\nExample with datetime conversion:\n```json\n[\n  \"greaterthan\",\n  [\n    \"epochtime\",  // Convert to Unix timestamp before comparison\n    [\n      \"extract\",\n      \"createdDate\"\n    ]\n  ],\n  1609459200000  // January 1, 2021 as Unix timestamp in milliseconds\n]\n```\n\nExample with transformations:\n```json\n[\n  \"and\",\n  [\n    \"matches\",\n    [\n      \"lowercase\",  // Convert to lowercase before matching\n      [\n        \"string\",\n        [\n          \"extract\",\n          \"categories\"\n        ]\n      ]\n    ],\n    \"netsuite\"\n  ],\n  [\n    \"notequals\",\n    [\n      \"string\",\n      [\n        \"extract\",\n        \"recurrence.pattern.type\"\n      ]\n    ],\n    \"\"\n  ]\n]\n```\n\nExample comparing a record field with a flow setting:\n```json\n[\n  \"equals\",\n  [\n    \"string\",\n    [\n      \"extract\",\n      \"trantype\"\n    ]\n  ],\n  [\n    \"string\",\n    [\n      \"settings\",\n      \"flow.trantype\"\n    ]\n  ]\n]\n```\n\n**Examples**\n\nExample 1: Status field is not equal to \"cancelled\"\n```json\n[\n  \"notequals\",\n  [\n    \"extract\",\n    \"status\"\n  ],\n  \"cancelled\"\n]\n```\n\nExample 2: Filename starts with \"HC\"\n```json\n[\n  \"startswith\",\n  [\n    \"extract\",\n    \"filename\"\n  ],\n  \"HC\"\n]\n```\n\nExample 3: Amount is greater than 100\n```json\n[\n  \"greaterthan\",\n  [\n    \"number\",\n    [\n      \"extract\",\n      \"amount\"\n    ]\n  ],\n  100\n]\n```\n\nExample 4: Order date is after January 1, 2023\n```json\n[\n  \"greaterthan\",\n  [\n    \"extract\",\n    \"orderDate\"\n  ],\n  \"2023-01-01T00:00:00Z\"\n]\n```\n\nExample 5: Category contains any of [\"Urgent\", \"High Priority\"]\n```json\n[\n  \"anyof\",\n  [\n    \"extract\",\n    \"category\"\n  ],\n  [\"Urgent\", \"High Priority\"]\n]\n```\n","items":{"oneOf":[{"title":"String","type":"string"},{"title":"Number","type":"number"},{"title":"Boolean","type":"boolean"},{"title":"Object","type":"object"},{"title":"Array","type":"array"}]}}}},"script":{"type":"object","description":"Configuration for programmable script-based filtering. This object enables complex, custom\nfiltering logic beyond what expression-based filtering can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to determine which items\nshould be processed.\n\n**Implementation approach**\n\nScript-based filtering works by:\n1. Executing the specified function from the referenced script\n2. Passing item data to the function\n3. 
Using the function's return value (true/false) to determine inclusion\n\n**Common use cases**\n\nScript filtering is ideal for:\n- Complex business logic that can't be expressed as simple rules\n- Dynamic filtering criteria that change based on external factors\n- Content-based filtering that requires deep inspection\n- Advanced pattern matching beyond simple wildcards\n- Multi-stage filtering with intermediate logic\n\nFor AI agents: Only use script-based filtering when expression-based filtering is insufficient.\nScript filtering requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to the Script resource that contains the filtering logic. This must be a valid\nObjectId of a Script resource that exists in the system.\n\nThe referenced script must contain the function specified in the `function` field\nand must be written to handle filtering specifically. The script receives\nitem data as its input and must return a boolean value indicating whether\nto process the item (true) or skip it (false).\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n"},"function":{"type":"string","description":"Name of the function within the script to execute for filtering decisions. This function\nmust exist in the script referenced by _scriptId.\n\n**Function requirements**\n\nThe specified function must:\n- Accept item data as its first parameter\n- Return a boolean value (true to process the item, false to skip it)\n- Handle errors gracefully\n- Execute efficiently (as it may run for many items)\n\n**Function signature**\n\n```javascript\nfunction filterItems(itemData) {\n  // itemData contains properties of the item being evaluated\n  // Custom logic here\n  return true; // or false to skip the item\n}\n```\n\nFor AI agents: Ensure the function name exactly matches a function defined in the\nreferenced script, as mismatches will cause the filter to fail.\n"}}}}},"Transform":{"type":"object","description":"Configuration for transforming data during processing operations. This object enables\nreshaping of records.\n\n**Transformation capabilities**\n\nCeligo's transformation engine offers powerful features for data manipulation:\n- Precise field mapping with JSONPath expressions\n- Support for any level of nested arrays\n- Formula-based field value generation\n- Dynamic references to flow and integration settings\n\n**Implementation approaches**\n\nThere are two distinct transformation mechanisms available:\n\n**Rule-Based Transformation (`type: \"expression\"`)**\n- **Best For**: Most transformation scenarios from simple to complex\n- **Capabilities**: Field mapping, formula calculations, lookups, nested data handling\n- **Advantages**: Visual configuration, no coding required, intuitive interface\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear mapping requirements or need to reshape data structure\n\n**Script-Based Transformation (`type: \"script\"`)**\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Capabilities**: Full programmatic control, custom processing, complex business rules\n- **Advantages**: Maximum flexibility, can implement any transformation logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Visual transformation tools aren't sufficient for your use case\n","properties":{"type":{"type":"string","description":"Determines which transformation mechanism to use. 
This choice affects which properties\nmust be configured and how transformation logic is implemented.\n\n**Available types**\n\n**Rule-Based Transformation (`\"expression\"`)**\n- **Required Config**: The `expression` object with mapping definitions\n- **Behavior**: Applies declarative rules to reshape data\n- **Best For**: Most transformation scenarios from simple to complex\n- **Advantages**: Visual configuration, no coding required\n\n**Script-Based Transformation (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to transform data\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard data transformations, use `\"expression\"`\n2. For complex logic or specialized processing, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based transformations. This object enables reshaping data\nwithout requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define transformation rules that\ncan map, modify, and generate data elements.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Field mappings define how input data is transformed to target fields\n- Formulas can be used to calculate or generate new values\n- Lookups can enrich data by fetching related information\n- Mode determines how records are processed (create new or modify existing)\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"2\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"2\"\nfor current implementations.\n","enum":["2"]},"rulesTwoDotZero":{"type":"object","description":"Configuration for version 2 transformation rules. 
This object contains the core logic\nfor how data is mapped, enriched, and transformed.\n\n**Capabilities**\n\nTransformation 2.0 provides:\n- Precise field mapping with JSONPath expressions\n- Support for deeply nested data structures\n- Formula-based field generation\n- Dynamic lookups for data enrichment\n- Multiple operating modes to fit different scenarios\n","properties":{"mode":{"type":"string","description":"Transformation mode that determines how records are handled during processing.\n\n**Available modes**\n\n**Create Mode (`\"create\"`)**\n- **Behavior**: Builds entirely new output records from inputs\n- **Use When**: Output structure differs significantly from input\n- **Advantage**: Clean slate approach, no field inheritance\n\n**Modify Mode (`\"modify\"`)**\n- **Behavior**: Makes targeted edits to existing records\n- **Use When**: Output structure should remain similar to input\n- **Advantage**: Preserves unmapped fields from the original record\n","enum":["create","modify"]},"mappings":{"$ref":"#/components/schemas/Mappings"},"lookups":{"allOf":[{"description":"Shared lookup tables used across all mappings defined in the transformation rules.\n\n**Purpose**\n\nLookups provide centralized value translation that can be referenced from any mapping\nin your transformation configuration. They enable consistent translation of codes, IDs,\nand values between systems without duplicating translation logic.\n\n**Usage in transformations**\n\nLookups are particularly valuable in transformations for:\n\n- **Data Normalization**: Standardizing values from diverse source systems\n- **Code Translation**: Converting between different coding systems (e.g., status codes)\n- **Field Enrichment**: Adding descriptive values based on ID or code lookups\n- **Cross-Reference Resolution**: Mapping identifiers between integrated systems\n\n**Implementation**\n\nLookups are defined once in this array and referenced by name in mappings:\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"statusMapping\",\n    \"map\": {\n      \"A\": \"Active\",\n      \"I\": \"Inactive\",\n      \"P\": \"Pending\"\n    },\n    \"default\": \"Unknown Status\"\n  }\n]\n```\n\nThen referenced in mappings using the lookupName property:\n\n```json\n{\n  \"generate\": \"status\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.statusCode\",\n  \"lookupName\": \"statusMapping\"\n}\n```\n\nThe system automatically applies the lookup during transformation processing.\n\nFor complete details on lookup properties and behavior, see the Lookups schema.\n"},{"$ref":"#/components/schemas/Lookups"}]}}}}},"script":{"type":"object","description":"Configuration for programmable script-based transformations. This object enables complex, custom\ntransformation logic beyond what expression-based transformations can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to transform data according to\nspecialized business rules or complex algorithms.\n\n**Implementation approach**\n\nScript-based transformation works by:\n1. Executing the specified function from the referenced script\n2. Passing input data to the function\n3. 
Using the function's return value as the transformed output\n\n**Common use cases**\n\nScript transformation is ideal for:\n- Complex business logic that can't be expressed through mappings\n- Algorithmic transformations requiring computation\n- Dynamic transformations based on external factors\n- Legacy system data format compatibility\n- Multi-stage processing with intermediate steps\n\nOnly use script-based transformation when expression-based transformation is insufficient.\nScript transformation requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the transformation logic.\n\nThe referenced script should contain the function specified in the\n'function' property.\n","format":"objectid"},"function":{"type":"string","description":"Name of the function within the script to execute for transformation. This function\nmust exist in the script referenced by _scriptId.\n"}}}}},"Mappings":{"type":"array","description":"Array of field mapping configurations for transforming data from one format into another.\n\n**Guidance**\n\nThis schema is designed around RECURSION as its core architectural principle. Understanding this recursive\nnature is essential for building effective mappings:\n\n1. The schema is self-referential by design - a mapping can contain nested mappings of the same structure\n2. Complex data structures (nested objects, arrays of objects, arrays of arrays of objects) are ALL\n   handled through this recursive pattern\n3. Each mapping handles one level of the data structure; deeper levels are handled by nested mappings\n\nWhen generating mappings programmatically:\n- For simple fields (string, number, boolean): Create single mapping objects\n- For objects: Create a parent mapping with nested 'mappings' array containing child field mappings\n- For arrays: Use 'buildArrayHelper' with extract paths defining array inputs and\n  recursive 'mappings' to define object structures\n\nThe system will process these nested structures recursively during runtime, ensuring proper construction\nof complex hierarchical data while maintaining excellent performance.\n","items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. json Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean, date)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}}},"items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. json Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean, date)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the PRIMARY mechanism for array data types (a source array that already has the right shape can instead be passed through unchanged via `extract` alone):\n\n**When to Use**\n- Used when dataType ends with \"array\" (stringarray, objectarray, etc.) and the output array must be constructed or iterated\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}},"Lookups":{"type":"array","description":"Configuration for value-to-value transformations using lookup tables.\n\n**Purpose**\n\nLookups provide a way to translate values from one system to another. They transform\ninput values into output values using either static mapping tables or\ndynamic lookup caches.\n\n**Lookup mechanisms**\n\nThere are two distinct lookup mechanisms available:\n\n1. **Static Lookups**: Define a simple key-value map object and store it as part of your resource\n   - Best for: Small, fixed sets of values that rarely change\n   - Implementation: Configure the `map` object with input-to-output value mappings\n   - Example: Country codes, status values, simple translations\n\n2. **Dynamic Lookups**: Reference an existing 'Lookup Cache' resource in your Celigo account\n   - Best for: Large datasets, frequently changing values, or complex reference data\n   - Implementation: Configure `_lookupCacheId` to reference cached data maintained independently\n   - Example: Product catalogs, customer databases, pricing information\n\n**Property usage**\n\nThere are two mutually exclusive ways to configure lookups, depending on which mechanism you choose:\n\n1. 
**For Static Mappings**: Configure the `map` property with a direct key-value object\n   ```json\n   \"map\": {\"US\": \"United States\", \"CA\": \"Canada\"}\n   ```\n\n2. **For Dynamic Lookups**: Configure the following properties:\n   - `_lookupCacheId`: Reference to the lookup cache resource\n   - `extract`: JSON path to extract a specific value from the returned lookup object\n\n**When to use**\n\nLookups are ideal for:\n\n1. **Value Translation**: Mapping codes or IDs to human-readable values\n\n2. **Data Enrichment**: Adding related information to records during processing\n\n3. **Normalization**: Ensuring consistent formatting of values across systems\n\n**Implementation details**\n\nLookups can be referenced in:\n\n1. **Field Mappings**: Direct use in field transformation configurations\n\n2. **Handlebars Templates**: Use within templates with the syntax:\n   ```\n   {{lookup 'lookupName' record.fieldName}}\n   ```\n\n**Example usage**\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"countryCodeToName\",\n    \"map\": {\n      \"US\": \"United States\",\n      \"CA\": \"Canada\",\n      \"UK\": \"United Kingdom\"\n    },\n    \"default\": \"Unknown Country\",\n    \"allowFailures\": true\n  },\n  {\n    \"name\": \"productDetails\",\n    \"_lookupCacheId\": \"60a2c4e6f321d800129a1a3c\",\n    \"extract\": \"$.details.price\",\n    \"allowFailures\": false\n  }\n]\n```\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique identifier for the lookup table within this configuration.\n\nThis name must be unique within the scope where the lookup is defined and is used to reference\nthe lookup in handlebars templates with the syntax {{lookup 'name' value}}.\n\nChoose descriptive names that indicate the transformation purpose, such as:\n- \"countryCodeToName\" for country code to full name conversion\n- \"statusMapping\" for status code translations\n- \"departmentCodes\" for department code to name mapping\n"},"map":{"type":"object","description":"The lookup mapping table as key-value pairs.\n\nThis object contains the input values as keys and their corresponding\noutput values. When an input value matches a key in this object,\nit will be replaced with the corresponding value.\n\nThe map should be kept to a reasonable size (typically under 100 entries)\nfor optimal performance. 
For larger mapping requirements, consider using\ndynamic lookups instead.\n\nMaps can include:\n- Simple code to name conversions: {\"US\": \"United States\"}\n- Status transformations: {\"A\": \"Active\", \"I\": \"Inactive\"}\n- ID to name mappings: {\"100\": \"Marketing\", \"200\": \"Sales\"}\n\nValues can be strings, numbers, or booleans, but all are stored as strings\nin the configuration.\n"},"_lookupCacheId":{"type":"string","description":"Reference to a LookupCache resource that contains the reference data for the lookup.\n\n**Purpose**\n\nThis field connects the lookup to an external data source that has been cached in the system.\nUnlike static lookups that use the `map` property, dynamic lookups can reference large datasets\nor frequently changing information without requiring constant updates to the integration.\n\n**Implementation details**\n\nThe LookupCache resource referenced by this ID contains:\n- The data records to be used as a reference source\n- Configuration for how the data should be indexed and accessed\n- Caching parameters to balance performance with data freshness\n\n**Usage patterns**\n\nCommonly used to reference:\n- Product catalogs or SKU databases\n- Customer or account information\n- Pricing tables or discount rules\n- Complex business logic lookup tables\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n","format":"objectid"},"extract":{"type":"string","description":"JSON path expression that extracts a specific value from the cached lookup object.\n\n**Purpose**\n\nWhen using dynamic lookups with a LookupCache, this JSON path identifies which field to extract\nfrom the cached object after it has been retrieved using the lookup key.\n\n**Implementation details**\n\n- Must use JSON path syntax (similar to mapping extract fields)\n- Operates on the cached object returned by the lookup operation\n- Examples:\n  - \"$.name\" - Extract the name field from the top level\n  - \"$.details.price\" - Extract a nested price field\n  - \"$.attributes[0].value\" - Extract a value from the first element of an array\n\n**Usage scenario**\n\nWhen a lookup cache contains complex objects:\n```json\n// Cache entry for key \"PROD-123\":\n{\n  \"id\": \"PROD-123\",\n  \"name\": \"Premium Widget\",\n  \"details\": {\n    \"price\": 99.99,\n    \"currency\": \"USD\",\n    \"inStock\": true\n  }\n}\n```\n\nSetting extract to \"$.details.price\" would return 99.99 as the lookup result.\n\nIf no extract is provided, the entire cached object is returned as the lookup result.\n"},"default":{"type":"string","description":"Default value to use when the source value is not found in the lookup map.\n\nThis value is used as a fallback when:\n1. The source value doesn't match any key in the map\n2. 
allowFailures is set to true\n\nSetting an appropriate default helps prevent flow failures due to unexpected\nvalues and provides predictable behavior for edge cases.\n\nCommon default patterns include:\n- Descriptive unknowns: \"Unknown Country\", \"Unspecified Status\"\n- Original value indicators: \"{Original Value}\", \"No mapping found\"\n- Neutral values: \"Other\", \"N/A\", \"Miscellaneous\"\n\nIf allowFailures is false and no default is specified, the flow will fail\nwhen encountering unmapped values.\n"},"allowFailures":{"type":"boolean","description":"When true, missing lookup values will use the default value rather than causing an error.\n\n**Behavior control**\n\nThis field determines how the system handles source values that don't exist in the map:\n\n- true: Use the default value for missing mappings and continue processing\n- false: Treat missing mappings as errors, failing the record\n\n**Recommendation**\n\nSet this to true when:\n- New source values might appear over time\n- Data quality issues could introduce unexpected values\n- Processing should continue even with imperfect mapping\n\nSet this to false when:\n- Complete data accuracy is critical\n- All possible source values are known and controlled\n- Missing mappings indicate serious data problems that should be addressed\n\nThe best practice is typically to set allowFailures to true with a meaningful\ndefault value, so flows remain operational while alerting you to missing mappings.\n"}}}},"Form":{"type":"object","description":"Configuration for creating user-friendly settings forms that make it easier for less technical users\nto configure integration resources.\n\n**Settings form builder**\n\nThe Settings Form Builder allows you to create or edit user-friendly fields that prompt for text entry\nor selections that will be returned as settings applied to this resource. Your forms can include any\nfield types that you see elsewhere in integrator.io, such as:\n\n- Text fields\n- Dropdown selections\n- Checkboxes\n- Radio buttons\n- Date pickers\n- Multi-select fields\n- Search fields\n\nForm fields make it much easier for less technical users to work with your integration settings by:\n\n- Providing clear labels and help text\n- Enforcing validation rules\n- Offering pre-defined selection options\n- Grouping related settings logically\n- Supporting conditional visibility\n- Creating a consistent user experience\n","properties":{"form":{"type":"object","description":"Configuration that defines the structure, fields, and behavior of the settings form.\n\nThis object contains the complete definition of the form's layout, fields, validation rules,\nand interactive behaviors. 
The specific structure depends on the form complexity and can include\nfield definitions, sections, conditional display logic, and default values.\n\nThe form configuration is typically created and managed through the visual Form Builder interface\nrather than edited directly as JSON.\n","properties":{"fieldMap":{"type":"object","description":"A mapping of field identifiers to their configuration objects.\nEach key in this object represents a unique field ID, and the value contains\nall the configuration settings for that specific form field.\n","additionalProperties":{"type":"object","description":"Configuration for an individual form field.\n","properties":{"id":{"type":"string","description":"Unique identifier for this field within the form.\nThis value typically matches the key in the fieldMap object.\n"},"name":{"type":"string","description":"Name of the field, used as the property name when generating the settings object\nfrom the submitted form data.\n"},"type":{"type":"string","description":"The type of form control to render for this field.\n","enum":["text","checkbox","radiogroup","relativeuri","editor","keyvalue","select","multiselect","toggle","datetime","date"]},"label":{"type":"string","description":"Display label shown next to the field in the form.\n"},"description":{"type":"string","description":"Detailed explanation text that appears below the field, providing more context\nthan the label or helpText.\n"},"helpText":{"type":"string","description":"Explanatory text that appears when hovering over the help icon next to the field.\nUsed to provide additional guidance on how to use the field.\n"},"required":{"type":"boolean","description":"When true, the field must have a value before the form can be submitted.\n","default":false},"multiline":{"type":"boolean","description":"For text fields, determines whether the input should be a multi-line text area\ninstead of a single-line input.\n","default":false},"rowsMax":{"type":"integer","description":"For multiline text fields, specifies the maximum number of visible rows.\n"},"inputType":{"type":"string","description":"For text fields, specifies the HTML input type attribute to apply additional\nvalidation or specialized input behavior.\n","enum":["text","number","email","password","tel","url"]},"delimiter":{"type":"string","description":"For text fields, specifies a character to use for splitting the input into an array.\nUsed for collecting multiple values in a single text field.\n"},"mode":{"type":"string","description":"For editor fields, specifies the type of content being edited for syntax highlighting.\n","enum":["json","xml","csv","text"]},"keyName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the key input.\n"},"valueName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the value input.\n"},"showDelete":{"type":"boolean","description":"For keyvalue fields, determines whether to show a delete button for each key-value pair.\n"},"doNotAllowFutureDates":{"type":"boolean","description":"For date and datetime fields, restricts selection to dates not in the future.\n"},"skipTimezoneConversion":{"type":"boolean","description":"For datetime fields, prevents automatic timezone conversion of the date value.\n"},"options":{"type":"array","description":"For fields that present choices (select, multiselect, radiogroup, toggle), defines\nthe available options.\n","items":{"oneOf":[{"title":"Option 
group","type":"object","properties":{"items":{"type":"array","items":{"oneOf":[{"title":"String value","type":"string"},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]},"description":"Array of option values/labels to display in the selection control.\n"}}},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]}},"visibleWhen":{"type":"array","description":"Conditional display rules that determine when this field should be visible.\nIf empty or not provided, the field is always visible.\n","items":{"type":"object","properties":{"field":{"type":"string","description":"The ID of another field whose value controls the visibility of this field.\n"},"is":{"type":"array","items":{"type":"string"},"description":"Array of values - if the referenced field has any of these values,\nthis field will be visible.\n"}}}}}}},"layout":{"type":"object","description":"Defines how the form fields are arranged and grouped in the UI.\nThe layout can organize fields into columns, sections, or other visual groupings.\n","properties":{"type":{"type":"string","description":"The type of layout to use for the form.\n","enum":["column"]},"containers":{"type":"array","description":"Array of container objects that group fields or contain nested containers.\nEach container can represent a column, box, indented section, or collapsible section.\n","items":{"type":"object","properties":{"type":{"type":"string","description":"The visual style of the container.\n","enum":["indent","box","collapse"]},"label":{"type":"string","description":"The heading text displayed for this container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this container.\nEach ID must correspond to a key in the fieldMap object.\n"},"containers":{"type":"array","description":"Nested containers within this container. 
Allows for hierarchical organization\nof fields with different visual styles.\n","items":{"type":"object","properties":{"label":{"type":"string","description":"The heading text displayed for this nested container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this nested container.\n"}}}}}}}}}},"additionalProperties":true},"init":{"type":"object","description":"Configuration for custom JavaScript initialization that executes when the form is first loaded.\n\nThis object defines a JavaScript hook that prepares the form for use, sets initial field values,\nperforms validation, or otherwise customizes the form behavior before it is displayed to the user.\n\n**Function signature**\n\nThe initialization function is invoked with a single 'options' argument containing contextual information:\n```javascript\nfunction formInit(options) {\n  // Process options and return the form object\n  return options.resource.settingsForm.form;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.resource` - The current resource being configured\n- `options.parentResource` - The parent of the current resource\n- `options.grandparentResource` - The grandparent of the current resource\n- `options.license` - For integration apps, the license provisioned to the integration\n- `options.parentLicense` - For integration apps, the parent of the license\n\n\n**Common uses**\n\n- Dynamically generate field options based on resource configuration\n- Pre-populate default values from related resources\n- Apply conditional logic that depends on resource properties\n- Add, remove, or modify form fields based on user permissions or account settings\n- Fetch external data to populate selection options\n- Implement complex validation rules that depend on resource context\n- Create branching form experiences based on user selections\n\n**Return value**\n\nThe function must return a valid form object that the UI can render.\nThrowing an exception will signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called when the form\nis initialized and should handle any custom setup logic.\n\nThe function must follow the expected signature and return a valid form object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the initialization function.\n\nThe referenced script should contain the function specified in the\n'function' property. This script must be accessible within the user's account\nand have appropriate permissions.\n"}}}}},"PreSave":{"type":"object","description":"Defines a JavaScript hook that executes before the resource is saved.\n\nThis hook allows for programmatic validation, transformation, or enrichment of the\nresource itself before it is persisted. 
It can be used to enforce business rules,\nset derived properties, or implement cross-field validations that can't be expressed\nthrough the standard UI.\n\n**Function signature**\n\nThe preSave function is invoked with a single 'options' argument containing:\n```javascript\nfunction preSave(options) {\n  // Process options and return the modified resource\n  return options.newResource;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.newResource` - The resource being saved (with pending changes)\n- `options.oldResource` - The previous version of the resource (before changes)\n\n\n**Common uses**\n\n- Enforcing complex business rules across multiple fields\n- Automatically deriving field values based on other configuration\n- Performing validation that depends on external systems or data\n- Normalizing or standardizing configuration values\n- Adding computed or derived properties\n- Implementing versioning or change tracking\n- Dynamically looking up data using the Celigo API module to enrich configuration\n\n**Return value**\n\nThe function must return the newResource object (potentially modified) to be saved.\nThrowing an exception will prevent saving and signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called just before\nthe resource is saved.\n\nThe function must follow the expected signature and return the resource object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the preSave function.\n\nThe referenced script should contain the function specified in the\n'function' property. 
This script must be accessible within the user's account\nand have appropriate permissions.\n"}}},"Settings":{"type":"object","description":"Configuration settings that can be accessed by hooks, filters, mappings and handlebars templates at runtime.\n\nIt enables customization of the resource's logic, allowing hooks, mappings, filters, and\nhandlebars to access and apply the settings at runtime.\n\n**Usage**\n\nThe settings object can store arbitrary JSON data that you want to save with the resource.\nWhile it's often populated through a form defined in the `settingsForm` field, you can also:\n\n- Directly provide JSON settings without using a form\n- Store configuration values used by hooks and templates\n- Create resource-specific constants and parameters\n- Maintain lookup tables or mapping structures\n- Define conditional logic parameters\n\n**Accessibility**\n\nSettings are available in:\n- All handlebars fields for building dynamic payloads\n- Field mapping expressions\n- JavaScript hooks via the options object\n- Filters and transformations\n\n**Best practices**\n\nFor non-technical users, create a custom form instead of editing the JSON directly.\nThis provides a user-friendly interface for updating settings without requiring JSON knowledge.\n","additionalProperties":true},"ResourceResponse":{"type":"object","description":"Core response fields shared by all Celigo resources","properties":{"_id":{"type":"string","format":"objectId","readOnly":true,"description":"Unique identifier for the resource.\n\nThe _id is used in:\n- API endpoints that operate on a specific resource (e.g., GET, PUT, DELETE)\n- References from other resources (e.g., flows that use this resource)\n- Job history and error tracking\n\nFormat: 24-character hexadecimal string\n"},"createdAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was initially created.\n\nThis read-only field is automatically set during resource creation and cannot\nbe modified. 
It provides an audit trail for when the resource was first added\nto the system, which can be useful for:\n\n- Resource lifecycle management\n- Audit and compliance reporting\n- Troubleshooting integration timelines\n- Identifying older resources that may need review\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"},"lastModified":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was most recently updated.\n\nThis read-only field is automatically updated whenever any property of the\nresource is modified. It provides an audit trail that can be used for:\n\n- Determining if a resource has changed since it was last reviewed\n- Monitoring configuration changes during troubleshooting\n- Implementing cache invalidation strategies\n- Synchronizing related resources based on modification time\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix)\nand will always be equal to or later than the createdAt timestamp.\n"},"deletedAt":{"type":["string","null"],"format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was marked for deletion.\n\nWhen this field is present and contains a valid timestamp, it indicates\nthat the resource has been soft-deleted (moved to the recycle bin) but not\nyet permanently removed from the system. This allows for recovery of\naccidentally deleted resources within a specified retention period.\n\nThe deletedAt timestamp enables:\n- Filtering deleted resources from active resource listings\n- Implementing time-based retention policies for permanent deletion\n- Tracking deletion events for audit and compliance purposes\n- Resource recovery workflows with clear timeframes\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\nWhen null or absent, the resource is considered active.\n"}},"required":["_id"]},"IAResourceResponse":{"type":"object","description":"Integration app response fields for resources that are part of integration apps","properties":{"_integrationId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the specific integration instance that contains this resource.\n\nThis field is only populated for resources that are part of an integration app\ninstallation. 
It contains the unique identifier (_id) of the integration\nresource that was installed in the account.\n\nThe integration instance represents a specific installed instance of an\nintegration app, with its own configuration, settings, and runtime environment.\n\nThis reference enables:\n- Tracing the resource back to its parent integration instance\n- Permission and access control based on integration ownership\n- Lifecycle management (enabling/disabling, updating, or uninstalling)\n"},"_connectorId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the integration app that defines this resource.\n\nThis field is only populated for resources that are part of an integration app.\nIt contains the unique identifier (_id) of the integration app (connector)\nthat defines the structure, behavior, and templates for this resource.\n\nThe integration app is the published template that can be installed\nmultiple times across different accounts, with each installation creating\na separate integration instance (referenced by _integrationId).\n\nThis reference enables:\n- Identifying the source integration app for this resource\n- Determining which template version is being used\n- Linking to documentation, support, and marketplace information\n"}}},"AIDescription":{"type":"object","description":"AI-generated descriptions and documentation for the resource.\n\nThis object contains automatically generated content that helps users\nunderstand the purpose, behavior, and configuration of the resource without\nrequiring them to analyze the technical details. The AI-generated content\nis sanitized and safe for display in the UI.\n","properties":{"summary":{"type":"string","description":"Brief AI-generated summary of the resource's purpose and functionality.\n\nThis concise description provides a quick overview of what the resource does,\nwhat systems it interacts with, and its primary role in the integration.\nThe summary is suitable for display in list views, dashboards, and other\ncontexts where space is limited.\n\nMaximum length: 10KB\n"},"detailed":{"type":"string","description":"Comprehensive AI-generated description of the resource's functionality.\n\nThis detailed explanation covers the resource's purpose, configuration details,\ndata flow patterns, filtering logic, and other technical aspects. It provides\nin-depth information suitable for documentation, tooltips, or detailed views\nin the administration interface.\n\nThe content may include HTML formatting for improved readability.\n\nMaximum length: 10KB\n"},"generatedOn":{"type":"string","format":"date-time","description":"Timestamp indicating when the AI description was generated.\n\nThis field helps track the freshness of the AI-generated content and\ndetermine when it might need to be regenerated due to changes in the\nresource's configuration or behavior.\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"}}},"APIM":{"type":"array","description":"Read-only field that stores information about the integration resources\npublished in the API Management (APIM) platform.\n\nThis field tracks the relationship between integrator.io resources and their\npublished counterparts in the Gravitee API Management platform, which is\ntightly integrated with the Celigo UI. 
When resources are \"pushed\" to Gravitee,\nthis field is populated with the relevant identifiers and statuses.\n","items":{"type":"object","properties":{"apiId":{"type":"string","description":"Identifier for the API where this integrator.io resource is published in the APIM.\n\nThis is a Gravitee resource identifier (not prefixed with underscore like Celigo IDs)\nthat uniquely identifies the API in the API Management platform.\n"},"flowId":{"type":"string","description":"Identifier for the flow within the API where this integrator.io resource is linked.\n\nWhen an API has multiple integrator.io resources linked, each resource is associated\nwith a specific flow in the API, identified by this field. This is a Gravitee\nresource identifier.\n"},"status":{"type":"string","description":"Indicates the publishing stage of the integrator.io resource in APIM.\n\nPossible values:\n- 'oaspending': The resource is published but the OpenAPI Specification (OAS) is not\n  yet published. The apiId will be updated with the API ID created in APIM.\n- 'published': The OpenAPI Specification for the integrator.io resource has been\n  successfully uploaded to APIM.\n","enum":["oaspending","published"]}}}},"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. `500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}},"responses":{"400-bad-request":{"description":"Bad request. The server could not understand the request because of malformed syntax or invalid parameters.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}},"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. 
Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"404-not-found":{"description":"Not found. The requested resource does not exist or is not visible to the caller.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}}},"paths":{"/v1/imports/{_id}":{"put":{"summary":"Update an import","description":"Updates an existing import with the provided configuration.\nThis is used for major updates to an import's structure or behavior.\n","operationId":"updateImport","tags":["Imports"],"parameters":[{"name":"_id","in":"path","description":"The unique identifier of the import","required":true,"schema":{"type":"string","format":"objectId"}}],"requestBody":{"required":true,"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Request"}}}},"responses":{"200":{"description":"Import updated successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Response"}}}},"400":{"$ref":"#/components/responses/400-bad-request"},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"$ref":"#/components/responses/404-not-found"}}}}}}
````
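
For reference, a minimal Python sketch of a read–modify–write update against this endpoint. This is not an official client: the bearer token and import id are placeholders, the GET-by-id read used to fetch the current document is assumed from the usual integrator.io resource endpoints, and the read-only fields stripped before the PUT are an illustrative, adjust-as-needed list.

```python
# Minimal sketch (not an official client) of updating an import via PUT.
# Token, ids, and the assumed GET-by-id read are placeholders; adjust to your account.
import requests

BASE = "https://api.integrator.io"            # or https://api.eu.integrator.io
HEADERS = {
    "Authorization": "Bearer <your-bearer-token>",
    "Content-Type": "application/json",
}
IMPORT_ID = "<import-_id>"

# Read the current configuration (assumed GET-by-id endpoint), tweak one field,
# and PUT the whole document back -- PUT replaces the import configuration.
current = requests.get(f"{BASE}/v1/imports/{IMPORT_ID}", headers=HEADERS)
current.raise_for_status()
body = current.json()
body["name"] = "Orders import (renamed)"      # illustrative change only

# Drop a few fields marked read-only in the spec before resubmitting (adjust as needed).
for field in ("createdAt", "lastModified", "apiIdentifier", "debugUntil"):
    body.pop(field, None)

updated = requests.put(f"{BASE}/v1/imports/{IMPORT_ID}", headers=HEADERS, json=body)
updated.raise_for_status()                    # 200 returns the updated import
print(updated.json()["lastModified"])
```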

## Delete an import

> Deletes an import. The import is soft-deleted and retained in the recycle bin\
> for 30 days before permanent removal. If the import is currently in use by\
> any flows, those flows may fail until reconfigured.<br>

````json
{"openapi":"3.1.0","info":{"title":"Imports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"responses":{"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"404-not-found":{"description":"Not found. The requested resource does not exist or is not visible to the caller.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}},"schemas":{"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. `500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}}},"paths":{"/v1/imports/{_id}":{"delete":{"summary":"Delete an import","description":"Deletes an import. The import is soft-deleted and retained in the recycle bin\nfor 30 days before permanent removal. If the import is currently in use by\nany flows, those flows may fail until reconfigured.\n","operationId":"deleteImport","tags":["Imports"],"parameters":[{"name":"_id","in":"path","description":"The unique identifier of the import","required":true,"schema":{"type":"string","format":"objectId"}}],"responses":{"204":{"description":"Import deleted successfully"},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"$ref":"#/components/responses/404-not-found"}}}}}}
````
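
A minimal sketch of the delete call in Python; the token and id are placeholders. A `204` means the import was soft-deleted into the recycle bin:

```python
# Minimal sketch: soft-delete an import (token and id are placeholders).
import requests

BASE = "https://api.integrator.io"
HEADERS = {"Authorization": "Bearer <your-bearer-token>"}

resp = requests.delete(f"{BASE}/v1/imports/<import-_id>", headers=HEADERS)
if resp.status_code == 204:
    print("Import moved to the recycle bin (retained for 30 days).")
elif resp.status_code == 404:
    # 404 uses the standard {errors: [...]} envelope
    print("Not found:", resp.json()["errors"][0]["message"])
else:
    resp.raise_for_status()
```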

## Patch an import

> Partially updates an import using a JSON Patch document (RFC 6902).\
> Only the `replace` operation is supported, and only on the following\
> whitelisted path:
>
> | Path | Description |
> |------|-------------|
> | `/debugUntil` | Debug logging expiry (ISO-8601, max 1 hour from now) |
>
> All other paths are rejected with `422`.<br>

````json
{"openapi":"3.1.0","info":{"title":"Imports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"JsonPatchRequest":{"type":"array","description":"A JSON Patch document (RFC 6902). Send an array of patch\noperations. Only the `replace` operation is supported, and only\non whitelisted fields — all other paths are rejected with 422.","minItems":1,"items":{"$ref":"#/components/schemas/JsonPatchOperation"}},"JsonPatchOperation":{"type":"object","description":"A single JSON Patch operation (RFC 6902).","required":["op","path"],"properties":{"op":{"type":"string","enum":["replace"],"description":"The operation to perform. Only `replace` is supported."},"path":{"type":"string","description":"JSON Pointer (RFC 6901) to the field to patch. Only\nwhitelisted paths are accepted — unlisted paths return\n`422` with `\"<path> is not a whitelisted property\"`."},"value":{"description":"The new value to set at the given path."}}},"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. `500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}},"responses":{"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"404-not-found":{"description":"Not found. 
The requested resource does not exist or is not visible to the caller.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}},"422-unprocessable-entity":{"description":"Unprocessable entity. The request was well-formed but was unable to be followed due to semantic errors.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}}},"paths":{"/v1/imports/{_id}":{"patch":{"summary":"Patch an import","description":"Partially updates an import using a JSON Patch document (RFC 6902).\nOnly the `replace` operation is supported, and only on the following\nwhitelisted path:\n\n| Path | Description |\n|------|-------------|\n| `/debugUntil` | Debug logging expiry (ISO-8601, max 1 hour from now) |\n\nAll other paths are rejected with `422`.","operationId":"patchImport","tags":["Imports"],"parameters":[{"name":"_id","in":"path","description":"The unique identifier of the import","required":true,"schema":{"type":"string","format":"objectId"}}],"requestBody":{"required":true,"content":{"application/json":{"schema":{"$ref":"#/components/schemas/JsonPatchRequest"}}}},"responses":{"204":{"description":"Import patched successfully"},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"$ref":"#/components/responses/404-not-found"},"422":{"$ref":"#/components/responses/422-unprocessable-entity"}}}}}}
````
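
A minimal sketch of the only supported patch — enabling debug logging for 30 minutes by replacing `/debugUntil`. The token and id are placeholders, and the value is formatted as ISO-8601 UTC within the one-hour limit:

```python
# Minimal sketch: enable debug logging via JSON Patch (RFC 6902).
# Only `replace` on the whitelisted /debugUntil path is accepted; the value
# must be ISO-8601 and at most one hour in the future.
from datetime import datetime, timedelta, timezone
import requests

BASE = "https://api.integrator.io"
HEADERS = {
    "Authorization": "Bearer <your-bearer-token>",
    "Content-Type": "application/json",
}

debug_until = (datetime.now(timezone.utc) + timedelta(minutes=30)).strftime("%Y-%m-%dT%H:%M:%S.000Z")
patch = [{"op": "replace", "path": "/debugUntil", "value": debug_until}]

resp = requests.patch(f"{BASE}/v1/imports/<import-_id>", headers=HEADERS, json=patch)
if resp.status_code == 204:
    print("Debug logging enabled until", debug_until)
elif resp.status_code == 422:
    print("Rejected:", resp.json()["errors"])   # e.g. a non-whitelisted path
else:
    resp.raise_for_status()
```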

## Clone an import

> Creates a copy of an existing import.\
> Referenced connections can optionally be remapped via `connectionMap`.<br>

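A minimal sketch of a clone request with a connection remap, assuming the endpoint follows the usual `POST /v1/imports/{_id}/clone` pattern (confirm the exact path against the spec below); all ids and the token are placeholders.

```python
# Minimal sketch of a clone request; the /clone path is an assumption based on
# the usual integrator.io clone endpoints. Ids and token are placeholders.
import requests

BASE = "https://api.integrator.io"
HEADERS = {
    "Authorization": "Bearer <your-bearer-token>",
    "Content-Type": "application/json",
}

body = {
    "name": "Orders import (sandbox copy)",   # optional; server generates a default otherwise
    "connectionMap": {
        "<original-connection-_id>": "<replacement-connection-_id>"
    },
}

resp = requests.post(f"{BASE}/v1/imports/<import-_id>/clone", headers=HEADERS, json=body)
resp.raise_for_status()
print(resp.json())   # cloned import, or a list of created resources
```
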
````json
{"openapi":"3.1.0","info":{"title":"Imports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"CloneRequest":{"type":"object","description":"Request body for cloning an import.","properties":{"name":{"type":"string","description":"Optional name for the cloned resource. If omitted, the server may generate a default clone name."},"connectionMap":{"type":"object","description":"Optional mapping of original connection ids to replacement connection ids.\nKeys are source connection ids on the original resource; values are target connection ids.\n","additionalProperties":{"type":"string"}}},"additionalProperties":true},"CloneResponse":{"description":"Response body for a clone operation. Some clone endpoints return the cloned resource, while others may return a list of related created resources.","oneOf":[{"$ref":"#/components/schemas/Response"},{"type":"array","items":{"type":"object","properties":{"model":{"type":"string","description":"Model name of the created resource (e.g., Flow, Export, Import)."},"_id":{"type":"string","format":"objectId","description":"Unique id of the created resource."},"name":{"type":"string","description":"Optional name of the created resource."}},"required":["_id"]}}]},"Response":{"type":"object","description":"Complete export object as returned by the API","allOf":[{"$ref":"#/components/schemas/Request"},{"$ref":"#/components/schemas/ResourceResponse"},{"$ref":"#/components/schemas/IAResourceResponse"},{"type":"object","properties":{"aiDescription":{"$ref":"#/components/schemas/AIDescription"},"apim":{"$ref":"#/components/schemas/APIM"},"apiIdentifier":{"type":"string","readOnly":true,"description":"API identifier assigned to this import."},"_sourceId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the source resource this import was created from."},"_templateId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the template used to create this import."},"draft":{"type":"boolean","readOnly":true,"description":"Indicates whether this import is in draft state."},"draftExpiresAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp when the draft version of this import expires."},"debugUntil":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp until which debug logging is enabled for this import."}}}]},"Request":{"type":"object","description":"Configuration for import","properties":{"_connectionId":{"type":"string","format":"objectId","description":"A unique identifier representing the specific connection instance within the system. 
This ID is used to track, manage, and reference the connection throughout its lifecycle, ensuring accurate routing and association of messages or actions to the correct connection context.\n\n**Field behavior**\n- Uniquely identifies a single connection session.\n- Remains consistent for the duration of the connection.\n- Used to correlate requests, responses, and events related to the connection.\n- Typically generated by the system upon connection establishment.\n\n**Implementation guidance**\n- Ensure the connection ID is globally unique within the scope of the application.\n- Use a format that supports easy storage and retrieval, such as UUID or a similar unique string.\n- Avoid exposing sensitive information within the connection ID.\n- Validate the connection ID format on input to prevent injection or misuse.\n\n**Examples**\n- \"123e4567-e89b-12d3-a456-426614174000\"\n- \"conn-9876543210\"\n- \"a1b2c3d4e5f6g7h8i9j0\"\n\n**Important notes**\n- The connection ID should not be reused for different connections.\n- It is critical for maintaining session integrity and security.\n- Should be treated as an opaque value by clients; its internal structure should not be assumed.\n\n**Dependency chain**\n- Generated during connection initialization.\n- Used by session management and message routing components.\n- Referenced in logs, audits, and debugging tools.\n\n**Technical details**\n- Typically implemented as a string data type.\n- May be generated using UUID libraries or custom algorithms.\n- Stored in connection metadata and passed in API headers or payloads as needed."},"_integrationId":{"type":"string","format":"objectId","description":"The unique identifier for the integration instance associated with the current operation or resource. This ID is used to reference and manage the specific integration within the system, ensuring that actions and data are correctly linked to the appropriate integration context.\n\n**Field behavior**\n- Uniquely identifies an integration instance.\n- Used to associate requests or resources with a specific integration.\n- Typically immutable once set to maintain consistent linkage.\n- May be required for operations involving integration-specific data or actions.\n\n**Implementation guidance**\n- Ensure the ID is generated in a globally unique manner to avoid collisions.\n- Validate the format and existence of the integration ID before processing requests.\n- Use this ID to fetch or update integration-related configurations or data.\n- Secure the ID to prevent unauthorized access or manipulation.\n\n**Examples**\n- \"intg_1234567890abcdef\"\n- \"integration-987654321\"\n- \"a1b2c3d4e5f6g7h8i9j0\"\n\n**Important notes**\n- This ID should not be confused with user IDs or other resource identifiers.\n- It is critical for maintaining the integrity of integration-related workflows.\n- Changes to this ID after creation can lead to inconsistencies or errors.\n\n**Dependency chain**\n- Dependent on the integration creation or registration process.\n- Used by downstream services or components that interact with the integration.\n- May be referenced in logs, audit trails, and monitoring systems.\n\n**Technical details**\n- Typically represented as a string.\n- May follow a specific pattern or prefix to denote integration IDs.\n- Stored in databases or configuration files linked to integration metadata.\n- Used as a key in API calls, database queries, and internal processing logic."},"_connectorId":{"type":"string","format":"objectId","description":"The unique 
identifier for the connector associated with the resource or operation. This ID is used to reference and manage the specific connector within the system, enabling integration and communication between different components or services.\n\n**Field behavior**\n- Uniquely identifies a connector instance.\n- Used to link resources or operations to a specific connector.\n- Typically immutable once assigned.\n- Required for operations that involve connector-specific actions.\n\n**Implementation guidance**\n- Ensure the connector ID is globally unique within the system.\n- Validate the format and existence of the connector ID before processing.\n- Use consistent naming or ID conventions to facilitate management.\n- Secure the connector ID to prevent unauthorized access or manipulation.\n\n**Examples**\n- \"conn-12345\"\n- \"connector_abcde\"\n- \"uuid-550e8400-e29b-41d4-a716-446655440000\"\n\n**Important notes**\n- The connector ID must correspond to a valid and active connector.\n- Changes to the connector ID may affect linked resources or workflows.\n- Connector IDs are often generated by the system and should not be manually altered.\n\n**Dependency chain**\n- Dependent on the existence of a connector registry or management system.\n- Used by components that require connector-specific configuration or data.\n- May be referenced by authentication, routing, or integration modules.\n\n**Technical details**\n- Typically represented as a string.\n- May follow UUID, GUID, or custom ID formats.\n- Stored and transmitted as part of API requests and responses.\n- Should be indexed in databases for efficient lookup."},"adaptorType":{"type":"string","description":"Specifies the underlying technology adapter that processes this import's operations.\n\nThis field determines:\n- Which connection types are compatible with this import\n- Which API endpoints and protocols will be used\n- Which import-specific configuration objects must be provided\n- The available features and capabilities of the import\n\nThe value must match an available adapter in the system and should correspond to the\nexternal system being accessed. For example:\n- \"HTTPImport\" for generic REST/SOAP APIs\n- \"SalesforceImport\" for Salesforce-specific operations\n- \"NetSuiteDistributedImport\" for NetSuite-specific operations\n- \"FTPImport\" for file transfers via FTP/SFTP\n\nWhen creating an import, this field must be set correctly and cannot be changed afterward\nwithout creating a new import resource.\n\n**Important notes**\n- When using a specific adapter type (e.g., \"SalesforceImport\"), you must also\nprovide the corresponding configuration object (e.g., \"salesforce\").\n- HTTPImport is the default adapter type for all REST/SOAP APIs and should be used unless there is a specialized adapter type for the specific API (such as SalesforceImport for Salesforce and NetSuiteDistributedImport for NetSuite).\n- WrapperImport is used for using a custom adapter built outside of Celigo, and is vary rarely used.\n- RDBMSImport is used for database imports that are built into Celigo such as SQL Server, MySQL, PostgreSQL, Snowflake, Oracle, MariaDB, etc. When using RDBMSImport, populate the \"rdbms\" configuration object.\n- JDBCImport is ONLY for generic JDBC connections that are not one of the built-in RDBMS types. Do NOT use JDBCImport for Snowflake, SQL Server, MySQL, PostgreSQL, or Oracle — use RDBMSImport instead. 
When using JDBCImport, populate the \"jdbc\" configuration object.\n- NetSuiteDistributedImport is the default for all NetSuite imports. Always use NetSuiteDistributedImport when importing into NetSuite.\n- NetSuiteImport is a legacy adaptor for NetSuite accounts without the Celigo SuiteApp 2.0. Do NOT use NetSuiteImport unless explicitly requested.\n\nThe connection type must be compatible with the adaptorType specified for this import.\nFor example, if adaptorType is \"HTTPImport\", _connectionId must reference a connection with type \"http\".\n","enum":["HTTPImport","FTPImport","AS2Import","S3Import","NetSuiteImport","NetSuiteDistributedImport","SalesforceImport","JDBCImport","RDBMSImport","MongodbImport","DynamodbImport","WrapperImport","AiAgentImport","GuardrailImport","FileSystemImport","ToolImport"]},"externalId":{"type":"string","description":"A unique identifier assigned to an entity by an external system or source, used to reference or correlate the entity across different platforms or services. This ID facilitates integration, synchronization, and data exchange between systems by providing a consistent reference point.\n\n**Field behavior**\n- Must be unique within the context of the external system.\n- Typically immutable once assigned to ensure consistent referencing.\n- Used to link or map records between internal and external systems.\n- May be optional or required depending on integration needs.\n\n**Implementation guidance**\n- Ensure the externalId format aligns with the external system’s specifications.\n- Validate uniqueness to prevent conflicts or data mismatches.\n- Store and transmit the externalId securely to maintain data integrity.\n- Consider indexing this field for efficient lookup and synchronization.\n\n**Examples**\n- \"12345-abcde-67890\"\n- \"EXT-2023-0001\"\n- \"user_987654321\"\n- \"SKU-XYZ-1001\"\n\n**Important notes**\n- The externalId is distinct from internal system identifiers.\n- Changes to the externalId can disrupt data synchronization.\n- Should be treated as a string to accommodate various formats.\n- May include alphanumeric characters, dashes, or underscores.\n\n**Dependency chain**\n- Often linked to integration or synchronization modules.\n- May depend on external system availability and consistency.\n- Used in API calls that involve cross-system data referencing.\n\n**Technical details**\n- Data type: string.\n- Maximum length may vary based on external system constraints.\n- Should support UTF-8 encoding to handle diverse character sets.\n- May require validation against a specific pattern or format."},"as2":{"$ref":"#/components/schemas/As2"},"dynamodb":{"$ref":"#/components/schemas/Dynamodb"},"http":{"$ref":"#/components/schemas/Http"},"ftp":{"$ref":"#/components/schemas/Ftp"},"jdbc":{"$ref":"#/components/schemas/Jdbc"},"mongodb":{"$ref":"#/components/schemas/Mongodb"},"netsuite":{"$ref":"#/components/schemas/NetSuite"},"netsuite_da":{"$ref":"#/components/schemas/NetsuiteDistributed"},"rdbms":{"$ref":"#/components/schemas/Rdbms"},"s3":{"$ref":"#/components/schemas/S3"},"wrapper":{"$ref":"#/components/schemas/Wrapper"},"salesforce":{"$ref":"#/components/schemas/Salesforce"},"file":{"$ref":"#/components/schemas/File"},"filesystem":{"$ref":"#/components/schemas/FileSystem"},"aiAgent":{"$ref":"#/components/schemas/AiAgent"},"guardrail":{"$ref":"#/components/schemas/Guardrail"},"name":{"type":"string","description":"The name property represents the identifier or title assigned to an entity, object, or resource within the system. 
It is typically a human-readable string that uniquely distinguishes the item from others in the same context. This property is essential for referencing, searching, and displaying the entity in user interfaces and API responses.\n\n**Field behavior**\n- Must be a non-empty string.\n- Should be unique within its scope to avoid ambiguity.\n- Often used as a primary label in UI components and logs.\n- May have length constraints depending on system requirements.\n- Typically case-sensitive or case-insensitive based on implementation.\n\n**Implementation guidance**\n- Validate the input to ensure it meets length and character requirements.\n- Enforce uniqueness if required by the application context.\n- Sanitize input to prevent injection attacks or invalid characters.\n- Support internationalization by allowing Unicode characters if applicable.\n- Provide clear error messages when validation fails.\n\n**Examples**\n- \"John Doe\"\n- \"Invoice_2024_001\"\n- \"MainServer\"\n- \"Project Phoenix\"\n- \"user@example.com\"\n\n**Important notes**\n- Avoid using reserved keywords or special characters that may interfere with system processing.\n- Consider trimming whitespace from the beginning and end of the string.\n- The name should be meaningful and descriptive to improve usability.\n- Changes to the name might affect references or links in other parts of the system.\n\n**Dependency chain**\n- May be referenced by other properties such as IDs, URLs, or display labels.\n- Could be linked to permissions or access control mechanisms.\n- Often used in search and filter operations within the API.\n\n**Technical details**\n- Data type: string.\n- Encoding: UTF-8 recommended.\n- Maximum length: typically defined by system constraints (e.g., 255 characters).\n- Validation rules: regex patterns may be applied to restrict allowed characters.\n- Storage considerations: indexed for fast lookup if used frequently in queries."},"description":{"type":"string","description":"A detailed textual explanation that describes the purpose, characteristics, or context of the associated entity or item. 
This field is intended to provide users with clear and comprehensive information to better understand the subject it represents.\n**Field behavior**\n- Accepts plain text or formatted text depending on implementation.\n- Should be concise yet informative, avoiding overly technical jargon unless necessary.\n- May support multi-line input to allow detailed explanations.\n- Typically optional but recommended for clarity.\n\n**Implementation guidance**\n- Ensure the description is user-friendly and accessible to the target audience.\n- Avoid including sensitive or confidential information.\n- Use consistent terminology aligned with the overall documentation or product language.\n- Consider supporting markdown or rich text formatting if applicable.\n\n**Examples**\n- \"This API endpoint retrieves user profile information including name, email, and preferences.\"\n- \"A unique identifier assigned to each transaction for tracking purposes.\"\n- \"Specifies the date and time when the event occurred, formatted in ISO 8601.\"\n\n**Important notes**\n- Keep descriptions up to date with any changes in functionality or behavior.\n- Avoid redundancy with other fields; focus on clarifying the entity’s role or usage.\n- Descriptions should not contain executable code or commands.\n\n**Dependency chain**\n- Often linked to the entity or property it describes.\n- May influence user understanding and correct usage of the associated field.\n- Can be referenced in user guides, API documentation, or UI tooltips.\n\n**Technical details**\n- Typically stored as a string data type.\n- May have length constraints depending on system limitations.\n- Encoding should support UTF-8 to accommodate international characters.\n- Should be sanitized to prevent injection attacks if rendered in UI contexts.","maxLength":10240},"unencrypted":{"type":"object","description":"Indicates whether the data or content is stored or transmitted without encryption, meaning it is in plain text and not protected by any cryptographic methods. 
This flag helps determine if additional security measures are necessary to safeguard sensitive information.\n\n**Field behavior**\n- Represents a boolean state where `true` means data is unencrypted and `false` means data is encrypted.\n- Used to identify if the data requires encryption before storage or transmission.\n- May influence processing logic related to data security and compliance.\n\n**Implementation guidance**\n- Ensure this field is explicitly set to reflect the actual encryption status of the data.\n- Use this flag to trigger encryption routines or security checks when data is unencrypted.\n- Validate the field to prevent false positives that could lead to security vulnerabilities.\n\n**Examples**\n- `true` indicating a password stored in plain text (not recommended).\n- `false` indicating a file that has been encrypted before upload.\n- `true` for a message sent over an unsecured channel.\n\n**Important notes**\n- Storing or transmitting data with `unencrypted` set to `true` can expose sensitive information to unauthorized access.\n- This field does not perform encryption itself; it only signals the encryption status.\n- Proper handling of unencrypted data is critical for compliance with data protection regulations.\n\n**Dependency chain**\n- Often used in conjunction with fields specifying encryption algorithms or keys.\n- May affect downstream processes such as logging, auditing, or alerting mechanisms.\n- Related to security policies and access control configurations.\n\n**Technical details**\n- Typically represented as a boolean value.\n- Should be checked before performing operations that assume data confidentiality.\n- May be integrated with security frameworks to enforce encryption standards."},"sampleData":{"type":"object","description":"An array or collection of sample data entries used to demonstrate or test the functionality of the API or application. This data serves as example input or output to help users understand the expected format, structure, and content. 
It can include various data types depending on the context, such as strings, numbers, objects, or nested arrays.\n\n**Field behavior**\n- Contains example data that mimics real-world usage scenarios.\n- Can be used for testing, validation, or demonstration purposes.\n- May be optional or required depending on the API specification.\n- Should be representative of typical data to ensure meaningful examples.\n\n**Implementation guidance**\n- Populate with realistic and relevant sample entries.\n- Ensure data types and structures align with the API schema.\n- Avoid sensitive or personally identifiable information.\n- Update regularly to reflect any changes in the API data model.\n\n**Examples**\n- A list of user profiles with names, emails, and roles.\n- Sample product entries with IDs, descriptions, and prices.\n- Example sensor readings with timestamps and values.\n- Mock transaction records with amounts and statuses.\n\n**Important notes**\n- Sample data is for illustrative purposes and may not reflect actual production data.\n- Should not be used for live processing or decision-making.\n- Ensure compliance with data privacy and security standards when creating samples.\n\n**Dependency chain**\n- Dependent on the data model definitions within the API.\n- May influence or be influenced by validation rules and schema constraints.\n- Often linked to documentation and testing frameworks.\n\n**Technical details**\n- Typically formatted as JSON arrays or objects.\n- Can include nested structures to represent complex data.\n- Size and complexity should be balanced to optimize performance and clarity.\n- May be embedded directly in API responses or provided as separate resources."},"distributed":{"type":"boolean","description":"Indicates whether the import is using a distributed adaptor, such as a NetSuite Distributed Import.\n**Field behavior**\n- Accepts boolean values: true or false.\n- When true, the import is using a distributed adaptor such as a NetSuite Distributed Import.\n- When false, the import is not using a distributed adaptor.\n\n**Examples**\n- true: When the adaptorType is NetSuiteDistributedImport\n- false: In all other cases"},"maxAttempts":{"type":"number","description":"Specifies the maximum number of attempts allowed for a particular operation or action before it is considered failed or terminated. This property helps control retry logic and prevents infinite loops or excessive retries in processes such as authentication, data submission, or task execution.  \n**Field behavior:**  \n- Defines the upper limit on retry attempts for an operation.  \n- Once the maximum attempts are reached, no further retries are made.  \n- Can be used to trigger fallback mechanisms or error handling after exceeding attempts.  \n**Implementation** GUIDANCE:  \n- Set to a positive integer value representing the allowed retry count.  \n- Should be configured based on the operation's criticality and expected failure rates.  \n- Consider exponential backoff or delay strategies in conjunction with maxAttempts.  \n- Validate input to ensure it is a non-negative integer.  \n**Examples:**  \n- 3 (allowing up to three retry attempts)  \n- 5 (permitting five attempts before failure)  \n- 0 (disabling retries entirely)  \n**Important notes:**  \n- Setting this value too high may cause unnecessary resource consumption or delays.  \n- Setting it too low may result in premature failure without sufficient retry opportunities.  \n- Should be used in combination with timeout settings for robust error handling.  
\n**Dependency chain:**  \n- Often used alongside properties like retryDelay, timeout, or backoffStrategy.  \n- May influence or be influenced by error handling or circuit breaker configurations.  \n**Technical details:**  \n- Typically represented as an integer data type.  \n- Enforced by the application logic or middleware managing retries.  \n- May be configurable at runtime or deployment time depending on system design."},"ignoreExisting":{"type":"boolean","description":"**CRITICAL FIELD:** Controls whether the import should skip records that already exist in the target system rather than attempting to update them or throwing errors.\n\nWhen set to true, the import will:\n- Check if each record already exists in the target system (using a lookup configured in the import, or if the record coming into the import has a value populated in the field configured in the ignoreExtract property)\n- Skip processing for records that are found to already exist\n- Continue processing only records that don't exist (new records)\n- This is for CREATE operations where you want to avoid duplicates\n\nWhen set to false or undefined (default):\n- All records are processed normally in the creation process (if applicable)\n- Existing records may be duplicated or may cause errors depending on the API\n- No special existence checking is performed\n\n**When to set this to true (MUST detect these PATTERNS)**\n\n**ALWAYS set ignoreExisting to true** when the user's prompt includes ANY of these patterns:\n\n**Primary Patterns (very common)**\n- \"ignore existing\" / \"ignoring existing\" / \"ignore any existing\"\n- \"skip existing\" / \"skipping existing\" / \"skip any existing\"\n- \"while ignoring existing\" / \"while skipping existing\"\n- \"skip duplicates\" / \"avoid duplicates\" / \"ignore duplicates\"\n\n**Secondary Patterns**\n- \"only create new\" / \"only add new\" / \"create new only\"\n- \"don't update existing\" / \"avoid updating existing\" / \"without updating\"\n- \"create if doesn't exist\" / \"add if doesn't exist\"\n- \"insert only\" / \"create only\"\n- \"skip if already exists\" / \"ignore if exists\"\n- \"don't process existing\" / \"bypass existing\"\n\n**Context Clues**\n- User says \"create\" or \"insert\" AND mentions checking/matching against existing data\n- User wants to add records but NOT modify what's already there\n- User is concerned about duplicate prevention during creation\n\n**Examples**\n\n**Should set ignoreExisting: true**\n- \"Create customer records in Shopify while ignoring existing customers\" → ignoreExisting: true\n- \"Create vendor records while ignoring existing vendors\" → ignoreExisting: true\n- \"Import new products only, skip existing ones\" → ignoreExisting: true\n- \"Add orders to the system, ignore any that already exist\" → ignoreExisting: true\n- \"Create accounts but don't update existing ones\" → ignoreExisting: true\n- \"Insert new contacts, skip duplicates\" → ignoreExisting: true\n- \"Create records, skipping any that match existing data\" → ignoreExisting: true\n\n**Should not set ignoreExisting: true**\n- \"Update existing customer records\" → ignoreExisting: false (update operation)\n- \"Create or update records\" → ignoreExisting: false (upsert operation)\n- \"Sync all records\" → ignoreExisting: false (sync/upsert operation)\n\n**Important**\n- This field is typically used with INSERT operations\n- Common pattern: \"Create X while ignoring existing X\" → MUST set to true\n\n**When to leave this false/undefined**\n\nLeave this false or undefined 
when:\n- The prompt doesn't mention skipping or ignoring existing records\n- The import is only performing updates (no inserts)\n- The prompt says \"sync\", \"update\", or \"upsert\"\n\n**Important notes**\n- This flag requires a properly configured lookup to identify existing records, or have an ignoreExtract field configured to identify the field that is used to determine if the record already exists.\n- The lookup typically checks a unique identifier field (id, email, external_id, etc.)\n- When true, existing records are silently skipped (not updated, not errored)\n- Use this for insert-only or create-only operations\n- Do NOT use this if the user wants to update existing records\n\n**Technical details**\n- Works in conjunction with lookup configurations to determine if a record exists or has an ignoreExtract field configured to determine if the record already exists.\n- The lookup queries the target system before attempting to create the record\n- If the lookup returns a match, the record is skipped\n- If the lookup returns no match, the record is processed normally"},"ignoreMissing":{"type":"boolean","description":"Indicates whether the system should ignore missing or non-existent resources during processing. When set to true, the operation will proceed without error even if some referenced items are not found; when false, the absence of required resources will cause the operation to fail or return an error.  \n**Field behavior:**  \n- Controls error handling related to missing resources.  \n- Allows operations to continue gracefully when some data is unavailable.  \n- Affects the robustness and fault tolerance of the process.  \n**Implementation guidance:**  \n- Use true to enable lenient processing where missing data is acceptable.  \n- Use false to enforce strict validation and fail fast on missing resources.  \n- Ensure that downstream logic can handle partial results if ignoring missing items.  \n**Examples:**  \n- ignoreMissing: true — skips over missing files during batch processing.  \n- ignoreMissing: false — throws an error if a referenced database entry is not found.  \n**Important notes:**  \n- Setting this to true may lead to incomplete results if missing data is critical.  \n- Default behavior should be clearly documented to avoid unexpected failures.  \n- Consider logging or reporting missing items even when ignoring them.  \n**Dependency** CHAIN:  \n- Often used in conjunction with resource identifiers or references.  \n- May impact validation and error reporting modules.  \n**Technical details:**  \n- Typically a boolean flag.  \n- Influences control flow and exception handling mechanisms.  \n- Should be clearly defined in API contracts to manage client expectations."},"idLockTemplate":{"type":"string","description":"The unique identifier for the lock template used to define the configuration and behavior of a lock within the system. 
This ID links the lock instance to a predefined template that specifies its properties, access rules, and operational parameters.\n\n**Field behavior**\n- Uniquely identifies a lock template within the system.\n- Used to associate a lock instance with its configuration template.\n- Typically immutable once assigned to ensure consistent behavior.\n\n**Implementation guidance**\n- Must be a valid identifier corresponding to an existing lock template.\n- Should be validated against the list of available templates before assignment.\n- Ensure uniqueness to prevent conflicts between different lock configurations.\n\n**Examples**\n- \"template-12345\"\n- \"lockTemplateA1B2C3\"\n- \"LT-987654321\"\n\n**Important notes**\n- Changing the template ID after lock creation may lead to inconsistent lock behavior.\n- The template defines critical lock parameters; ensure the correct template is referenced.\n- This field is essential for lock provisioning and management workflows.\n\n**Dependency chain**\n- Depends on the existence of predefined lock templates in the system.\n- Used by lock management and access control modules to apply configurations.\n\n**Technical details**\n- Typically represented as a string or UUID.\n- Should conform to the system’s identifier format standards.\n- Indexed for efficient lookup and retrieval in the database."},"dataURITemplate":{"type":"string","description":"A template string used to construct data URIs dynamically by embedding variable placeholders that are replaced with actual values at runtime. This template facilitates the generation of standardized data URIs for accessing or referencing resources within the system.\n\n**Field behavior**\n- Accepts a string containing placeholders for variables.\n- Placeholders are replaced with corresponding runtime values to form a complete data URI.\n- Enables dynamic and flexible URI generation based on context or input parameters.\n- Supports standard URI encoding where necessary to ensure valid URI formation.\n\n**Implementation guidance**\n- Define clear syntax for variable placeholders within the template (e.g., using curly braces or another delimiter).\n- Ensure that all required variables are provided at runtime to avoid incomplete URIs.\n- Validate the final constructed URI to confirm it adheres to URI standards.\n- Consider supporting optional variables with default values to increase template flexibility.\n\n**Examples**\n- \"data:text/plain;base64,{base64EncodedData}\"\n- \"data:{mimeType};charset={charset},{data}\"\n- \"data:image/{imageFormat};base64,{encodedImageData}\"\n\n**Important notes**\n- The template must produce a valid data URI after variable substitution.\n- Proper encoding of variable values is critical to prevent malformed URIs.\n- This property is essential for scenarios requiring inline resource embedding or data transmission via URIs.\n- Misconfiguration or missing variables can lead to invalid or unusable URIs.\n\n**Dependency chain**\n- Relies on the availability of runtime variables corresponding to placeholders.\n- May depend on encoding utilities to prepare variable values.\n- Often used in conjunction with data processing or resource generation components.\n\n**Technical details**\n- Typically a UTF-8 encoded string.\n- Supports standard URI schemes and encoding rules.\n- Variable placeholders should be clearly defined and consistently parsed.\n- May integrate with templating engines or string interpolation 
mechanisms."},"oneToMany":{"$ref":"#/components/schemas/OneToMany"},"pathToMany":{"$ref":"#/components/schemas/PathToMany"},"blobKeyPath":{"type":"string","description":"Specifies the file system path to the blob key, which uniquely identifies the binary large object (blob) within the storage system. This path is used to locate and access the blob data efficiently during processing or retrieval operations.\n**Field behavior**\n- Represents a string path pointing to the location of the blob key.\n- Used as a reference to access the associated blob data.\n- Typically immutable once set to ensure consistent access.\n- May be required for operations involving blob retrieval or manipulation.\n**Implementation guidance**\n- Ensure the path is valid and correctly formatted according to the storage system's conventions.\n- Validate the existence of the blob key at the specified path before processing.\n- Handle cases where the path may be null or empty gracefully.\n- Secure the path to prevent unauthorized access to blob data.\n**Examples**\n- \"/data/blobs/abc123def456\"\n- \"blobstore/keys/2024/06/blobkey789\"\n- \"s3://bucket-name/blob-keys/key123\"\n**Important notes**\n- The blobKeyPath must correspond to an existing blob key in the storage backend.\n- Incorrect or malformed paths can lead to failures in blob retrieval.\n- Access permissions to the path must be properly configured.\n- The format of the path may vary depending on the underlying storage technology.\n**Dependency chain**\n- Dependent on the storage system's directory or key structure.\n- Used by blob retrieval and processing components.\n- May be linked to metadata describing the blob.\n**Technical details**\n- Typically represented as a UTF-8 encoded string.\n- May include hierarchical directory structures or URI schemes.\n- Should be sanitized to prevent injection or path traversal vulnerabilities.\n- Often used as a lookup key in blob storage APIs or databases."},"blob":{"type":"boolean","description":"The binary large object (blob) representing the raw data content to be processed, stored, or transmitted. 
This field typically contains encoded or serialized data such as images, files, or other multimedia content in a compact binary format.\n\n**Field behavior**\n- Holds the actual binary data payload.\n- Can represent various data types including images, documents, or other file formats.\n- May be encoded in formats like Base64 for safe transmission over text-based protocols.\n- Treated as opaque data by the API, with no interpretation unless specified.\n\n**Implementation guidance**\n- Ensure the blob data is properly encoded (e.g., Base64) if the transport medium requires text-safe encoding.\n- Validate the size and format of the blob to meet API constraints.\n- Handle decoding and encoding consistently on both client and server sides.\n- Use streaming or chunking for very large blobs to optimize performance.\n\n**Examples**\n- A Base64-encoded JPEG image file.\n- A serialized JSON object converted into a binary format.\n- A PDF document encoded as a binary blob.\n- An audio file represented as a binary stream.\n\n**Important notes**\n- The blob content is typically opaque and should not be altered during transmission.\n- Size limits may apply depending on API or transport constraints.\n- Proper encoding and decoding are critical to avoid data corruption.\n- Security considerations should be taken into account when handling binary data.\n\n**Dependency chain**\n- May depend on encoding schemes (e.g., Base64) for safe transmission.\n- Often used in conjunction with metadata fields describing the blob type or format.\n- Requires appropriate content-type headers or descriptors for correct interpretation.\n\n**Technical details**\n- Usually represented as a byte array or Base64-encoded string in JSON APIs.\n- May require MIME type specification to indicate the nature of the data.\n- Handling may involve buffer management and memory considerations.\n- Supports binary-safe transport mechanisms to preserve data integrity."},"assistant":{"type":"string","description":"Specifies the configuration and behavior settings for the AI assistant that will interact with the user. 
This property defines how the assistant responds, including its personality, knowledge scope, response style, and any special instructions or constraints that guide its operation.\n\n**Field behavior**\n- Determines the assistant's tone, style, and manner of communication.\n- Controls the knowledge base or data sources the assistant can access.\n- Enables customization of the assistant's capabilities and limitations.\n- May include parameters for language, verbosity, and response format.\n\n**Implementation guidance**\n- Ensure the assistant configuration aligns with the intended user experience.\n- Validate that all required sub-properties within the assistant configuration are correctly set.\n- Support dynamic updates to the assistant settings to adapt to different contexts or user needs.\n- Provide defaults for unspecified settings to maintain consistent behavior.\n\n**Examples**\n- Setting the assistant to a formal tone with technical expertise.\n- Configuring the assistant to provide concise answers with references.\n- Defining the assistant to operate within a specific domain, such as healthcare or finance.\n- Enabling multi-language support for the assistant responses.\n\n**Important notes**\n- Changes to the assistant property can significantly affect user interaction quality.\n- Properly securing and validating assistant configurations is critical to prevent misuse.\n- The assistant's behavior should comply with ethical guidelines and privacy regulations.\n- Overly restrictive settings may limit the assistant's usefulness, while too broad settings may reduce relevance.\n\n**Dependency chain**\n- May depend on user preferences or session context.\n- Interacts with the underlying AI model and its capabilities.\n- Influences downstream processing of user inputs and outputs.\n- Can be linked to external knowledge bases or APIs for enhanced responses.\n\n**Technical details**\n- Typically structured as an object containing multiple nested properties.\n- May include fields such as personality traits, knowledge cutoff dates, and response constraints.\n- Supports serialization and deserialization for API communication.\n- Requires compatibility with the AI platform's configuration schema."},"deleteAfterImport":{"type":"boolean","description":"Indicates whether the source file should be deleted automatically after the import process completes successfully. 
This boolean flag helps manage storage by removing files that are no longer needed once their data has been ingested.\n\n**Field behavior**\n- When set to true, the system deletes the source file immediately after a successful import.\n- When set to false or omitted, the source file remains intact after import.\n- Deletion only occurs if the import process completes without errors.\n\n**Implementation guidance**\n- Validate that the import was successful before deleting the file.\n- Ensure proper permissions are in place to delete the file.\n- Consider logging deletion actions for audit purposes.\n- Provide clear user documentation about the implications of enabling this flag.\n\n**Examples**\n- true: The file \"data.csv\" will be removed after import.\n- false: The file \"data.csv\" will remain after import for manual review or backup.\n\n**Important notes**\n- Enabling this flag may result in data loss if the file is needed for re-import or troubleshooting.\n- Use with caution in environments where file retention policies are strict.\n- This flag does not affect files that fail to import.\n\n**Dependency chain**\n- Depends on the successful completion of the import operation.\n- May interact with file system permissions and cleanup routines.\n\n**Technical details**\n- Typically implemented as a boolean data type.\n- Triggers a file deletion API or system call post-import.\n- Should handle exceptions gracefully to avoid orphaned files or unintended deletions."},"assistantMetadata":{"type":"object","description":"Metadata containing additional information about the assistant, such as configuration details, versioning, capabilities, and any custom attributes that help in managing or identifying the assistant's behavior and context. This metadata supports enhanced control, monitoring, and customization of the assistant's interactions.\n\n**Field behavior**\n- Holds supplementary data that describes or configures the assistant.\n- Can include static or dynamic information relevant to the assistant's operation.\n- Used to influence or inform the assistant's responses and functionality.\n- May be updated or extended to reflect changes in the assistant's capabilities or environment.\n\n**Implementation guidance**\n- Structure metadata in a clear, consistent format (e.g., key-value pairs).\n- Ensure sensitive information is not exposed through metadata.\n- Use metadata to enable feature toggles, version tracking, or context awareness.\n- Validate metadata content to maintain integrity and compatibility.\n\n**Examples**\n- Version number of the assistant (e.g., \"v1.2.3\").\n- Supported languages or locales.\n- Enabled features or modules.\n- Custom tags indicating the assistant's domain or purpose.\n\n**Important notes**\n- Metadata should not contain user-specific or sensitive personal data.\n- Keep metadata concise to avoid performance overhead.\n- Changes in metadata may affect assistant behavior; manage updates carefully.\n- Metadata is primarily for internal use and may not be exposed to end users.\n\n**Dependency chain**\n- May depend on assistant configuration settings.\n- Can influence or be referenced by response generation logic.\n- Interacts with monitoring and analytics systems for tracking.\n\n**Technical details**\n- Typically represented as a JSON object or similar structured data.\n- Should be easily extensible to accommodate future attributes.\n- Must be compatible with the overall assistant API schema.\n- Should support serialization and deserialization without data 
loss."},"useTechAdaptorForm":{"type":"boolean","description":"Indicates whether the technical adaptor form should be utilized in the current process or workflow. This boolean flag determines if the system should enable and display the technical adaptor form interface for user interaction or automated processing.\n\n**Field behavior**\n- When set to true, the technical adaptor form is activated and presented to the user or system.\n- When set to false, the technical adaptor form is bypassed or hidden.\n- Influences the flow of data input and validation related to technical adaptor configurations.\n\n**Implementation guidance**\n- Default to false if the technical adaptor form is optional or not always required.\n- Ensure that enabling this flag triggers all necessary UI components and backend processes related to the technical adaptor form.\n- Validate that dependent fields or modules respond appropriately when this flag changes state.\n\n**Examples**\n- true: The system displays the technical adaptor form for configuration.\n- false: The system skips the technical adaptor form and proceeds without it.\n\n**Important notes**\n- Changing this flag may affect downstream processing and data integrity.\n- Must be synchronized with user permissions and feature availability.\n- Should be clearly documented in user guides if exposed in UI settings.\n\n**Dependency chain**\n- May depend on user roles or feature toggles enabling technical adaptor functionality.\n- Could influence or be influenced by other configuration flags related to adaptor workflows.\n\n**Technical details**\n- Typically implemented as a boolean data type.\n- Used in conditional logic to control UI rendering and backend processing paths.\n- May trigger event listeners or hooks when its value changes."},"distributedAdaptorData":{"type":"object","description":"Contains configuration and operational data specific to distributed adaptors used within the system. This data includes parameters, status information, and metadata necessary for managing and monitoring distributed adaptor instances across different nodes or environments. 
It facilitates synchronization, performance tuning, and error handling for adaptors operating in a distributed architecture.\n\n**Field behavior**\n- Holds detailed information about each distributed adaptor instance.\n- Supports dynamic updates to reflect real-time adaptor status and configuration changes.\n- Enables centralized management and monitoring of distributed adaptors.\n- May include nested structures representing various adaptor attributes and metrics.\n\n**Implementation guidance**\n- Ensure data consistency and integrity when updating distributedAdaptorData.\n- Use appropriate serialization formats to handle complex nested data.\n- Implement validation to verify the correctness of adaptor configuration parameters.\n- Design for scalability to accommodate varying numbers of distributed adaptors.\n\n**Examples**\n- Configuration settings for a distributed database adaptor including connection strings and timeout values.\n- Status reports indicating the health and performance metrics of adaptors deployed across multiple servers.\n- Metadata describing adaptor version, deployment environment, and last synchronization timestamp.\n\n**Important notes**\n- This property is critical for systems relying on distributed adaptors to function correctly.\n- Changes to this data should be carefully managed to avoid disrupting adaptor operations.\n- Sensitive information within adaptor data should be secured and access-controlled.\n\n**Dependency chain**\n- Dependent on the overall system configuration and deployment topology.\n- Interacts with monitoring and management modules that consume adaptor data.\n- May influence load balancing and failover mechanisms within the distributed system.\n\n**Technical details**\n- Typically represented as a complex object or map with key-value pairs.\n- May include arrays or lists to represent multiple adaptor instances.\n- Requires efficient parsing and serialization to minimize performance overhead.\n- Should support versioning to handle changes in adaptor data schema over time."},"filter":{"allOf":[{"description":"Filter configuration for selectively processing records during import operations.\n\n**Import-specific behavior**\n\n**Pre-Import Filtering**: Filters are applied to incoming records before they are sent to the destination system. Records that don't match the filter criteria are silently dropped and not imported.\n\n**Available filter fields**\n\nThe fields available for filtering are the data fields from each record being imported.\n"},{"$ref":"#/components/schemas/Filter"}]},"traceKeyTemplate":{"type":"string","description":"A template string used to generate unique trace keys for tracking and correlating requests across distributed systems. 
This template supports placeholders that can be dynamically replaced with runtime values such as timestamps, unique identifiers, or contextual metadata to create meaningful and consistent trace keys.\n\n**Field behavior**\n- Defines the format and structure of trace keys used in logging and monitoring.\n- Supports dynamic insertion of variables or placeholders to customize trace keys per request.\n- Ensures trace keys are unique and consistent for effective traceability.\n- Can be used to correlate logs, metrics, and traces across different services.\n\n**Implementation guidance**\n- Use clear and descriptive placeholders that map to relevant runtime data (e.g., {timestamp}, {requestId}).\n- Ensure the template produces trace keys that are unique enough to avoid collisions.\n- Validate the template syntax before applying it to avoid runtime errors.\n- Consider including service names or environment identifiers for multi-service deployments.\n- Keep the template concise to avoid excessively long trace keys.\n\n**Examples**\n- \"trace-{timestamp}-{requestId}\"\n- \"{serviceName}-{env}-{uniqueId}\"\n- \"traceKey-{userId}-{sessionId}-{epochMillis}\"\n\n**Important notes**\n- The template must be compatible with the tracing and logging infrastructure in use.\n- Placeholders should be well-defined and documented to ensure consistent usage.\n- Improper templates may lead to trace key collisions or loss of traceability.\n- Avoid sensitive information in trace keys to maintain security and privacy.\n\n**Dependency chain**\n- Relies on runtime context or metadata to replace placeholders.\n- Integrates with logging, monitoring, and tracing systems that consume trace keys.\n- May depend on unique identifier generators or timestamp providers.\n\n**Technical details**\n- Typically implemented as a string with placeholder syntax (e.g., curly braces).\n- Requires a parser or formatter to replace placeholders with actual values at runtime.\n- Should support extensibility to add new placeholders as needed.\n- May need to conform to character restrictions imposed by downstream systems."},"mockResponse":{"type":"object","description":"Defines a predefined response that the system will return when the associated API endpoint is invoked, allowing for simulation of real API behavior without requiring actual backend processing. 
This is particularly useful for testing, development, and demonstration purposes where consistent and controlled responses are needed.\n\n**Field behavior**\n- Returns the specified mock data instead of executing the real API logic.\n- Can simulate various response scenarios including success, error, and edge cases.\n- Overrides the default response when enabled.\n- Supports static or dynamic content depending on implementation.\n\n**Implementation guidance**\n- Ensure the mock response format matches the expected API response schema.\n- Use realistic data to closely mimic actual API behavior.\n- Update mock responses regularly to reflect changes in the real API.\n- Consider including headers, status codes, and body content in the mock response.\n- Provide clear documentation on when and how the mock response is used.\n\n**Examples**\n- Returning a fixed JSON object representing a user profile.\n- Simulating a 404 error response with an error message.\n- Providing a list of items with pagination metadata.\n- Mocking a successful transaction confirmation message.\n\n**Important notes**\n- Mock responses should not be used in production environments unless explicitly intended.\n- They are primarily for development, testing, and demonstration.\n- Overuse of mock responses can mask issues in the real API implementation.\n- Ensure that consumers of the API are aware when mock responses are active.\n\n**Dependency chain**\n- Dependent on the API endpoint configuration.\n- May interact with request parameters to determine the appropriate mock response.\n- Can be linked with feature flags or environment settings to toggle mock behavior.\n\n**Technical details**\n- Typically implemented as static JSON or XML payloads.\n- May include HTTP status codes and headers.\n- Can be integrated with API gateways or mocking tools.\n- Supports conditional logic in advanced implementations to vary responses."},"_ediProfileId":{"type":"string","format":"objectId","description":"The unique identifier for the Electronic Data Interchange (EDI) profile associated with the transaction or entity. 
This ID links the current data to a specific EDI configuration that defines the format, protocols, and trading partner details used for electronic communication.\n\n**Field behavior**\n- Uniquely identifies an EDI profile within the system.\n- Used to retrieve or reference EDI settings and parameters.\n- Must be consistent and valid to ensure proper EDI processing.\n- Typically immutable once assigned to maintain data integrity.\n\n**Implementation guidance**\n- Ensure the ID corresponds to an existing and active EDI profile in the system.\n- Validate the format and existence of the ID before processing transactions.\n- Use this ID to fetch EDI-specific rules, mappings, and partner information.\n- Secure the ID to prevent unauthorized changes or misuse.\n\n**Examples**\n- \"EDI12345\"\n- \"PROF-67890\"\n- \"X12_PROFILE_001\"\n\n**Important notes**\n- This ID is critical for routing and formatting EDI messages correctly.\n- Incorrect or missing IDs can lead to failed or misrouted EDI transmissions.\n- The ID should be managed centrally to avoid duplication or conflicts.\n\n**Dependency chain**\n- Depends on the existence of predefined EDI profiles in the system.\n- Used by EDI processing modules to apply correct communication standards.\n- May be linked to trading partner configurations and document types.\n\n**Technical details**\n- Typically a string or alphanumeric value.\n- Stored in a database with indexing for quick lookup.\n- May follow a naming convention defined by the organization or EDI standards.\n- Used as a foreign key in relational data models involving EDI transactions."},"parsers":{"type":["string","null"],"enum":["1"],"description":"A collection of parser configurations that define how input data should be interpreted and processed. Each parser specifies rules, patterns, or formats to extract meaningful information from raw data sources, enabling the system to handle diverse data types and structures effectively. 
This property allows customization and extension of parsing capabilities to accommodate various input formats.\n\n**Field behavior**\n- Accepts multiple parser definitions as an array or list.\n- Each parser operates independently to process specific data formats.\n- Parsers are applied in the order they are defined unless otherwise specified.\n- Supports enabling or disabling individual parsers dynamically.\n- Can include built-in or custom parser implementations.\n\n**Implementation guidance**\n- Ensure parsers are well-defined with clear matching criteria and extraction rules.\n- Validate parser configurations to prevent conflicts or overlaps.\n- Provide mechanisms to add, update, or remove parsers without downtime.\n- Support extensibility to integrate new parsing logic as needed.\n- Document each parser’s purpose and expected input/output formats.\n\n**Examples**\n- A JSON parser that extracts fields from JSON-formatted input.\n- A CSV parser that splits input lines into columns based on delimiters.\n- A regex-based parser that identifies patterns within unstructured text.\n- An XML parser that navigates hierarchical data structures.\n- A custom parser designed to interpret proprietary log file formats.\n\n**Important notes**\n- Incorrect parser configurations can lead to data misinterpretation or processing errors.\n- Parsers should be optimized for performance to handle large volumes of data efficiently.\n- Consider security implications when parsing untrusted input to avoid injection attacks.\n- Testing parsers with representative sample data is crucial for reliability.\n- Parsers may depend on external libraries or modules; ensure compatibility.\n\n**Dependency chain**\n- Relies on input data being available and accessible.\n- May depend on schema definitions or data contracts for accurate parsing.\n- Interacts with downstream components that consume parsed output.\n- Can be influenced by global settings such as character encoding or locale.\n- May require synchronization with data validation and transformation steps.\n\n**Technical details**\n- Typically implemented as modular components or plugins.\n- Configurations may include pattern definitions, field mappings, and error handling rules.\n- Supports various data formats including text, binary, and structured documents.\n- May expose APIs or interfaces for runtime configuration and monitoring.\n- Often integrated with logging and debugging tools to trace parsing operations."},"hooks":{"type":"object","properties":{"preMap":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the function to be executed during the pre-mapping hook phase. This function allows for custom logic to be applied before the main mapping process begins, enabling data transformation, validation, or any preparatory steps required. 
It should be defined in a way that it can be invoked with the appropriate context and input data.\n\n**Field behavior**\n- Executed before the mapping process starts.\n- Receives input data and context for processing.\n- Can modify or validate data prior to mapping.\n- Should return the processed data or relevant output for the next stage.\n\n**Implementation guidance**\n- Define the function with clear input parameters and expected output.\n- Ensure the function handles errors gracefully to avoid interrupting the mapping flow.\n- Keep the function focused on pre-mapping concerns only.\n- Test the function independently to verify its behavior before integration.\n\n**Examples**\n- A function that normalizes input data formats.\n- A function that filters out invalid entries.\n- A function that enriches data with additional attributes.\n- A function that logs input data for auditing purposes.\n\n**Important notes**\n- The function must be synchronous or properly handle asynchronous operations.\n- Avoid side effects that could impact other parts of the mapping process.\n- Ensure compatibility with the overall data pipeline and mapping framework.\n- Document the function’s purpose and usage clearly for maintainability.\n\n**Dependency chain**\n- Invoked after the preMap hook is triggered.\n- Outputs data consumed by the subsequent mapping logic.\n- May depend on external utilities or services for data processing.\n\n**Technical details**\n- Typically implemented as a JavaScript or TypeScript function.\n- Receives parameters such as input data object and context metadata.\n- Returns a transformed data object or a promise resolving to it.\n- Integrated into the mapping engine’s hook system for execution timing."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier for the script to be executed during the pre-mapping hook phase. This ID is used to reference and invoke the specific script that performs custom logic or transformations before the main mapping process begins.\n**Field behavior**\n- Specifies which script is triggered before the mapping operation.\n- Must correspond to a valid and existing script within the system.\n- Determines the custom pre-processing logic applied to data.\n**Implementation guidance**\n- Ensure the script ID is correctly registered and accessible in the environment.\n- Validate the script's compatibility with the preMap hook context.\n- Use consistent naming conventions for script IDs to avoid conflicts.\n**Examples**\n- \"script12345\"\n- \"preMapTransform_v2\"\n- \"hookScript_001\"\n**Important notes**\n- An invalid or missing script ID will result in the preMap hook being skipped or causing an error.\n- The script associated with this ID should be optimized for performance to avoid delays.\n- Permissions must be set appropriately to allow execution of the referenced script.\n**Dependency chain**\n- Depends on the existence of the script repository or script management system.\n- Linked to the preMap hook lifecycle event in the mapping process.\n- May interact with input data and influence subsequent mapping steps.\n**Technical details**\n- Typically represented as a string identifier.\n- Used internally by the system to locate and execute the script.\n- May be stored in a database or configuration file referencing available scripts."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the current operation or context. 
This ID is used to reference and manage the specific stack within the system, ensuring that all related resources and actions are correctly linked.  \n**Field behavior:**  \n- Uniquely identifies a stack instance within the environment.  \n- Used internally to track and manage stack-related operations.  \n- Typically immutable once assigned for a given stack lifecycle.  \n**Implementation guidance:**  \n- Should be generated or assigned by the system managing the stacks.  \n- Must be validated to ensure uniqueness within the scope of the application.  \n- Should be securely stored and transmitted to prevent unauthorized access or manipulation.  \n**Examples:**  \n- \"stack-12345abcde\"  \n- \"arn:aws:cloudformation:us-east-1:123456789012:stack/my-stack/50d6f0a0-1234-11e8-9a8b-0a1234567890\"  \n- \"projX-envY-stackZ\"  \n**Important notes:**  \n- This ID is critical for correlating resources and operations to the correct stack.  \n- Changing the stack ID after creation can lead to inconsistencies and errors.  \n- Should not be confused with other identifiers like resource IDs or deployment IDs.  \n**Dependency chain:**  \n- Depends on the stack creation or registration process to obtain the ID.  \n- Used by hooks, deployment scripts, and resource managers to reference the stack.  \n**Technical details:**  \n- Typically a string value, possibly following a specific format or pattern.  \n- May include alphanumeric characters, dashes, and colons depending on the system.  \n- Often used as a key in databases or APIs to retrieve stack information."},"configuration":{"type":"object","description":"Configuration settings that define the behavior and parameters for the preMap hook, allowing customization of its execution prior to the mapping process. This property enables users to specify options such as input validation rules, transformation parameters, or conditional logic that tailor the hook's operation to specific requirements.\n\n**Field behavior**\n- Accepts a structured object containing key-value pairs relevant to the preMap hook.\n- Influences how the preMap hook processes data before mapping occurs.\n- Can enable or disable specific features or validations within the hook.\n- Supports dynamic adjustment of hook behavior based on provided settings.\n\n**Implementation guidance**\n- Validate the configuration object against the expected schema to ensure correctness.\n- Provide default values for optional settings to maintain consistent behavior.\n- Document all configurable options clearly for users to understand their impact.\n- Allow extensibility to accommodate future configuration parameters without breaking compatibility.\n\n**Examples**\n- Setting a flag to enable strict input validation before mapping.\n- Specifying a list of fields to exclude from the mapping process.\n- Defining transformation rules such as trimming whitespace or converting case.\n- Providing conditional logic parameters to skip mapping under certain conditions.\n\n**Important notes**\n- Incorrect configuration values may cause the preMap hook to fail or behave unexpectedly.\n- Configuration should be tested thoroughly to ensure it aligns with the intended data processing flow.\n- Changes to configuration may affect downstream processes relying on the mapped data.\n- Sensitive information should not be included in the configuration to avoid security risks.\n\n**Dependency chain**\n- Depends on the preMap hook being enabled and invoked during the processing pipeline.\n- Influences the input data that will be passed
to subsequent mapping stages.\n- May interact with other hook configurations or global settings affecting data transformation.\n\n**Technical details**\n- Typically represented as a JSON object or equivalent data structure.\n- Parsed and applied at runtime before the mapping logic executes.\n- Supports nested configuration properties for complex customization.\n- Should be immutable during the execution of the preMap hook to ensure consistency."}},"description":"A hook function that is executed before the mapping process begins. This function allows for custom preprocessing or transformation of data prior to the main mapping logic being applied. It is typically used to modify input data, validate conditions, or set up necessary context for the mapping operation.\n\n**Field behavior**\n- Invoked once before the mapping starts.\n- Receives the raw input data or context.\n- Can modify or replace the input data before mapping.\n- Can abort or alter the flow based on custom logic.\n\n**Implementation guidance**\n- Implement as a synchronous or asynchronous function depending on the environment.\n- Ensure it returns the modified data or context for the mapping to proceed.\n- Handle errors gracefully to avoid disrupting the mapping process.\n- Use for data normalization, enrichment, or validation before mapping.\n\n**Examples**\n- Sanitizing input strings to remove unwanted characters.\n- Adding default values to missing fields.\n- Logging or auditing input data before transformation.\n- Validating input schema and throwing errors if invalid.\n\n**Important notes**\n- This hook is optional; if not provided, mapping proceeds with original input.\n- Changes made here directly affect the mapping outcome.\n- Avoid heavy computations to prevent performance bottlenecks.\n- Should not perform side effects that depend on mapping results.\n\n**Dependency chain**\n- Executed before the main mapping function.\n- Influences the data passed to subsequent mapping hooks or steps.\n- May affect downstream validation or output generation.\n\n**Technical details**\n- Typically implemented as a function with signature (inputData, context) => modifiedData.\n- Supports both synchronous return values and Promises for async operations.\n- Integrated into the mapping pipeline as the first step.\n- Can be configured or replaced depending on mapping framework capabilities."},"postMap":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the name of the function to be executed during the post-map hook phase. This function is invoked after the mapping process completes, allowing for custom processing, validation, or transformation of the mapped data. 
It should be defined within the appropriate scope and adhere to the expected signature for post-map hook functions.\n\n**Field behavior**\n- Defines the callback function to run after the mapping operation.\n- Enables customization and extension of the mapping logic.\n- Executes only once per mapping cycle during the post-map phase.\n- Can modify or augment the mapped data before final output.\n\n**Implementation guidance**\n- Ensure the function name corresponds to a valid, accessible function in the execution context.\n- The function should handle any necessary error checking and data validation.\n- Avoid long-running or blocking operations within the function to maintain performance.\n- Document the function’s expected inputs and outputs clearly.\n\n**Examples**\n- \"validateMappedData\"\n- \"transformPostMapping\"\n- \"logMappingResults\"\n\n**Important notes**\n- The function must be synchronous or properly handle asynchronous behavior if supported.\n- If the function is not defined or invalid, the post-map hook will be skipped without error.\n- This hook is optional but recommended for complex mapping scenarios requiring additional processing.\n\n**Dependency chain**\n- Depends on the mapping process completing successfully.\n- May rely on data structures produced by the mapping phase.\n- Should be compatible with other hooks or lifecycle events in the mapping pipeline.\n\n**Technical details**\n- Expected to be a string representing the function name.\n- The function should accept parameters as defined by the hook interface.\n- Execution context must have access to the function at runtime.\n- Errors thrown within the function may affect the overall mapping operation depending on error handling policies."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier for the script to be executed in the postMap hook. This ID is used to reference and invoke the specific script after the mapping process completes, enabling custom logic or transformations to be applied.  \n**Field behavior:**  \n- Accepts a string representing the script's unique ID.  \n- Must correspond to an existing script within the system.  \n- Triggers the execution of the associated script after the mapping operation finishes.  \n**Implementation guidance:**  \n- Ensure the script ID is valid and the script is properly deployed before referencing it here.  \n- Use this field to extend or customize the behavior of the mapping process via scripting.  \n- Validate the script ID to prevent runtime errors during hook execution.  \n**Examples:**  \n- \"script_12345\"  \n- \"postMapTransformScript\"  \n- \"customHookScript_v2\"  \n**Important notes:**  \n- The script referenced must be compatible with the postMap hook context.  \n- Incorrect or missing script IDs will result in the hook not executing as intended.  \n- This field is optional if no post-mapping script execution is required.  \n**Dependency chain:**  \n- Depends on the existence of the script repository or script management system.  \n- Relies on the postMap hook lifecycle event to trigger execution.  \n**Technical details:**  \n- Typically a string data type.  \n- Used internally to fetch and execute the script logic.  \n- May require permissions or access rights to execute the referenced script."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the post-map hook. 
This ID is used to reference and manage the specific stack instance within the system, enabling precise tracking and manipulation of resources or configurations tied to that stack during post-processing operations.\n\n**Field behavior**\n- Uniquely identifies a stack within the system.\n- Used during post-map hook execution to associate actions with the correct stack.\n- Immutable once set to ensure consistent reference throughout the lifecycle.\n- Required for operations that involve stack-specific context or resource management.\n\n**Implementation guidance**\n- Must be a valid and existing stack identifier within the system.\n- Should be validated for format and existence before use.\n- Ensure secure handling to prevent unauthorized access or manipulation.\n- Typically generated by the system and passed to hooks automatically.\n\n**Examples**\n- \"stack-12345\"\n- \"prod-stack-67890\"\n- \"dev-environment-stack-001\"\n\n**Important notes**\n- This ID is critical for linking hook operations to the correct stack context.\n- Incorrect or missing _stackId can lead to failed or misapplied post-map operations.\n- Should not be altered manually once assigned.\n\n**Dependency chain**\n- Depends on the stack creation or registration process that generates the ID.\n- Used by post-map hooks to perform stack-specific logic.\n- May influence downstream processes that rely on stack context.\n\n**Technical details**\n- Typically a string value.\n- Format may vary depending on the system's stack naming conventions.\n- Stored and transmitted securely to maintain integrity.\n- Used as a key in database queries or API calls related to stack management."},"configuration":{"type":"object","description":"Configuration settings that define the behavior and parameters for the postMap hook. 
This object allows customization of the hook's execution by specifying various options and values that control its operation.\n\n**Field behavior**\n- Accepts a structured object containing key-value pairs relevant to the postMap hook.\n- Determines how the postMap hook processes data after mapping.\n- Can include flags, thresholds, or other parameters influencing hook logic.\n- Optional or required depending on the specific implementation of the postMap hook.\n\n**Implementation guidance**\n- Validate the configuration object against the expected schema before use.\n- Ensure all required fields within the configuration are present and correctly typed.\n- Use default values for any missing optional parameters to maintain consistent behavior.\n- Document all configurable options clearly for users to understand their impact.\n\n**Examples**\n- `{ \"enableLogging\": true, \"maxRetries\": 3 }`\n- `{ \"timeout\": 5000, \"retryOnFailure\": false }`\n- `{ \"mappingStrategy\": \"strict\", \"caseSensitive\": true }`\n\n**Important notes**\n- Incorrect configuration values may cause the postMap hook to fail or behave unexpectedly.\n- Configuration should be tailored to the specific needs of the postMap operation.\n- Changes to configuration may require restarting or reinitializing the hook process.\n\n**Dependency chain**\n- Depends on the postMap hook being invoked.\n- May influence downstream processes that rely on the output of the postMap hook.\n- Interacts with other hook configurations if multiple hooks are chained.\n\n**Technical details**\n- Typically represented as a JSON object or equivalent data structure.\n- Parsed and validated at runtime before the hook executes.\n- Supports nested objects and arrays if the hook logic requires complex configurations."}},"description":"A hook function that is executed after the mapping process is completed. This function allows for custom logic to be applied to the mapped data, enabling modifications, validations, or side effects based on the results of the mapping operation. 
It receives the mapped data as input and can return a modified version or perform asynchronous operations.\n\n**Field behavior**\n- Invoked immediately after the mapping process finishes.\n- Receives the output of the mapping as its input parameter.\n- Can modify, augment, or validate the mapped data.\n- Supports synchronous or asynchronous execution.\n- The returned value from this hook replaces or updates the mapped data.\n\n**Implementation guidance**\n- Implement as a function that accepts the mapped data object.\n- Ensure proper handling of asynchronous operations if needed.\n- Use this hook to enforce business rules or data transformations post-mapping.\n- Avoid side effects that could interfere with subsequent processing unless intentional.\n- Validate the integrity of the data before returning it.\n\n**Examples**\n- Adjusting date formats or normalizing strings after mapping.\n- Adding computed properties based on mapped fields.\n- Logging or auditing mapped data for monitoring purposes.\n- Filtering out unwanted fields or entries from the mapped result.\n- Triggering notifications or events based on mapped data content.\n\n**Important notes**\n- This hook is optional and only invoked if defined.\n- Errors thrown within this hook may affect the overall mapping operation.\n- The hook should be performant to avoid slowing down the mapping pipeline.\n- Returned data must conform to expected schema to prevent downstream errors.\n\n**Dependency chain**\n- Depends on the completion of the primary mapping function.\n- May influence subsequent processing steps that consume the mapped data.\n- Should be coordinated with pre-mapping hooks to maintain data consistency.\n\n**Technical details**\n- Typically implemented as a callback or promise-returning function.\n- Receives one argument: the mapped data object.\n- Returns either the modified data object or a promise resolving to it.\n- Integrated into the mapping lifecycle after the main mapping logic completes."},"postSubmit":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the name of the function to be executed after the submission process completes. 
This function is typically used to perform post-submission tasks such as data processing, notifications, or cleanup operations.\n\n**Field behavior**\n- Invoked automatically after the main submission event finishes.\n- Can trigger additional workflows or side effects based on submission results.\n- Supports asynchronous execution depending on the implementation.\n\n**Implementation guidance**\n- Ensure the function name corresponds to a valid, accessible function within the execution context.\n- Validate that the function handles errors gracefully to avoid disrupting the overall submission flow.\n- Document the expected input parameters and output behavior of the function for maintainability.\n\n**Examples**\n- \"sendConfirmationEmail\"\n- \"logSubmissionData\"\n- \"updateUserStatus\"\n\n**Important notes**\n- The function must be idempotent if the submission process can be retried.\n- Avoid long-running operations within this function to prevent blocking the submission pipeline.\n- Security considerations should be taken into account, especially if the function interacts with external systems.\n\n**Dependency chain**\n- Depends on the successful completion of the submission event.\n- May rely on data generated or modified during the submission process.\n- Could trigger downstream hooks or events based on its execution.\n\n**Technical details**\n- Typically referenced by name as a string.\n- Execution context must have access to the function definition.\n- May support both synchronous and asynchronous invocation patterns depending on the platform."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier of the script to be executed after the submission process completes. This ID links the post-submit hook to a specific script that performs additional operations or custom logic once the main submission workflow has finished.\n\n**Field behavior**\n- Specifies which script is triggered automatically after the submission event.\n- Must correspond to a valid and existing script within the system.\n- Controls post-processing actions such as notifications, data transformations, or logging.\n\n**Implementation guidance**\n- Ensure the script ID is correctly referenced and the script is deployed before assigning this property.\n- Validate the script’s permissions and runtime environment compatibility.\n- Use consistent naming or ID conventions to avoid conflicts or errors.\n\n**Examples**\n- \"script_12345\"\n- \"postSubmitCleanupScript\"\n- \"notifyUserScript\"\n\n**Important notes**\n- If the script ID is invalid or missing, the post-submit hook will not execute.\n- Changes to the script ID require redeployment or configuration updates.\n- The script should be idempotent and handle errors gracefully to avoid disrupting the submission flow.\n\n**Dependency chain**\n- Depends on the existence of the script resource identified by this ID.\n- Linked to the post-submit hook configuration within the submission workflow.\n- May interact with other hooks or system components triggered after submission.\n\n**Technical details**\n- Typically represented as a string identifier.\n- Must be unique within the scope of available scripts.\n- Used by the system to locate and invoke the corresponding script at runtime."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the post-submit hook. 
This ID is used to reference and manage the specific stack instance within the system, enabling operations such as updates, deletions, or status checks after a submission event.\n\n**Field behavior**\n- Uniquely identifies a stack within the system.\n- Used to link post-submit actions to the correct stack.\n- Typically immutable once assigned to ensure consistent referencing.\n- Required for executing stack-specific post-submit logic.\n\n**Implementation guidance**\n- Ensure the ID is generated following the system’s unique identification standards.\n- Validate the presence and format of the stack ID before processing post-submit hooks.\n- Use this ID to fetch or manipulate stack-related data during post-submit operations.\n- Handle cases where the stack ID may not exist or is invalid gracefully.\n\n**Examples**\n- \"stack-12345\"\n- \"arn:aws:cloudformation:us-east-1:123456789012:stack/my-stack/abcde12345\"\n- \"proj-stack-v2\"\n\n**Important notes**\n- This ID must correspond to an existing stack within the environment.\n- Incorrect or missing stack IDs can cause post-submit hooks to fail.\n- The stack ID is critical for audit trails and debugging post-submit processes.\n\n**Dependency chain**\n- Depends on the stack creation or registration process to generate the ID.\n- Used by post-submit hook handlers to identify the target stack.\n- May be referenced by logging, monitoring, or notification systems post-submission.\n\n**Technical details**\n- Typically a string value.\n- May follow a specific format such as UUID, ARN, or custom naming conventions.\n- Stored and transmitted securely to prevent unauthorized access or manipulation.\n- Should be indexed in databases for efficient retrieval during post-submit operations."},"configuration":{"type":"object","description":"Configuration settings for the post-submit hook that define its behavior and parameters. 
This object contains key-value pairs specifying how the hook should operate after a submission event, including any necessary options, flags, or environment variables.\n\n**Field behavior**\n- Determines the specific actions and parameters used by the post-submit hook.\n- Can include nested settings tailored to the hook’s functionality.\n- Modifies the execution flow or output based on provided configuration values.\n\n**Implementation guidance**\n- Validate the configuration object against the expected schema for the post-submit hook.\n- Support extensibility to allow additional parameters without breaking existing functionality.\n- Ensure sensitive information within the configuration is handled securely.\n- Provide clear error messages if required configuration fields are missing or invalid.\n\n**Examples**\n- Setting a timeout duration for the post-submit process.\n- Specifying environment variables needed during hook execution.\n- Enabling or disabling certain features of the post-submit hook via flags.\n\n**Important notes**\n- The configuration must align with the capabilities of the post-submit hook implementation.\n- Incorrect or incomplete configuration may cause the hook to fail or behave unexpectedly.\n- Configuration changes typically require validation and testing before deployment.\n\n**Dependency chain**\n- Depends on the post-submit hook being triggered successfully.\n- May interact with other hooks or system components based on configuration.\n- Relies on the underlying system to interpret and apply configuration settings correctly.\n\n**Technical details**\n- Typically represented as a JSON object or equivalent data structure.\n- Supports nested objects and arrays for complex configurations.\n- May include string, numeric, boolean, or null values depending on parameters.\n- Parsed and applied at runtime during the post-submit hook execution phase."}},"description":"A callback function or hook that is executed immediately after a form submission process completes. 
This hook allows for custom logic to be run post-submission, such as handling responses, triggering notifications, updating UI elements, or performing cleanup tasks.\n\n**Field behavior**\n- Invoked only after the form submission has finished, regardless of success or failure.\n- Receives submission result data or error information as arguments.\n- Can be asynchronous to support operations like API calls or state updates.\n- Does not affect the submission process itself but handles post-submission side effects.\n\n**Implementation guidance**\n- Ensure the function handles both success and error scenarios gracefully.\n- Avoid long-running synchronous operations to prevent blocking the UI.\n- Use this hook to trigger any follow-up actions such as analytics tracking or user feedback.\n- Validate that the hook is defined before invocation to prevent runtime errors.\n\n**Examples**\n- Logging submission results to the console.\n- Displaying a success message or error notification to the user.\n- Redirecting the user to a different page after submission.\n- Resetting form fields or updating application state based on submission outcome.\n\n**Important notes**\n- This hook is optional; if not provided, no post-submission actions will be performed.\n- It should not be used to modify the submission data itself; that should be handled before submission.\n- Proper error handling within this hook is crucial to avoid unhandled exceptions.\n\n**Dependency chain**\n- Depends on the form submission process completing.\n- May interact with state management or UI components updated after submission.\n- Can be linked with pre-submit hooks for comprehensive form lifecycle management.\n\n**Technical details**\n- Typically implemented as a function accepting parameters such as submission response and error.\n- Can return a promise to support asynchronous operations.\n- Should be registered in the form configuration under the hooks.postSubmit property.\n- Execution context may vary depending on the form library or framework used."},"postAggregate":{"type":"object","properties":{"function":{"type":"string","description":"Specifies the function to be executed during the post-aggregation hook phase. This function is invoked after the aggregation process completes, allowing for custom processing, transformation, or validation of the aggregated data before it is returned or further processed. 
The function should be designed to handle the aggregated data structure and can modify, enrich, or analyze the results as needed.\n\n**Field behavior**\n- Executed after the aggregation operation finishes.\n- Receives the aggregated data as input.\n- Can modify or augment the aggregated results.\n- Supports custom logic tailored to specific post-processing requirements.\n\n**Implementation guidance**\n- Ensure the function signature matches the expected input and output formats.\n- Handle potential errors within the function to avoid disrupting the overall aggregation flow.\n- Keep the function efficient to minimize latency in the post-aggregation phase.\n- Validate the output of the function to maintain data integrity.\n\n**Examples**\n- A function that filters aggregated results based on certain criteria.\n- A function that calculates additional metrics from the aggregated data.\n- A function that formats or restructures the aggregated output for downstream consumption.\n\n**Important notes**\n- The function must be deterministic and side-effect free to ensure consistent results.\n- Avoid long-running operations within the function to prevent performance bottlenecks.\n- The function should not alter the original data source, only the aggregated output.\n\n**Dependency chain**\n- Depends on the completion of the aggregation process.\n- May rely on the schema or structure of the aggregated data.\n- Can be linked to subsequent hooks or processing steps that consume the modified data.\n\n**Technical details**\n- Typically implemented as a callback or lambda function.\n- May be written in the same language as the aggregation engine or as a supported scripting language.\n- Should conform to the API’s expected interface for hook functions.\n- Execution context may include metadata about the aggregation operation."},"_scriptId":{"type":"string","format":"objectId","description":"The unique identifier of the script to be executed after the aggregation process completes. This ID links the post-aggregation hook to a specific script that contains the logic or operations to be performed. 
It ensures that the correct script is triggered in the post-aggregation phase, enabling customized processing or data manipulation.\n\n**Field behavior**\n- Accepts a string representing the script's unique ID.\n- Used exclusively in the post-aggregation hook context.\n- Triggers the execution of the associated script after aggregation.\n- Must correspond to an existing and accessible script within the system.\n\n**Implementation guidance**\n- Validate that the script ID exists and is active before assignment.\n- Ensure the script linked by this ID is compatible with post-aggregation data.\n- Handle errors gracefully if the script ID is invalid or the script execution fails.\n- Maintain security by restricting script IDs to authorized scripts only.\n\n**Examples**\n- \"script12345\"\n- \"postAggScript_v2\"\n- \"cleanup_after_aggregation\"\n\n**Important notes**\n- The script ID must be unique within the scope of available scripts.\n- Changing the script ID will alter the behavior of the post-aggregation hook.\n- The script referenced should be optimized for performance to avoid delays.\n- Ensure proper permissions are set for the script to execute in this context.\n\n**Dependency chain**\n- Depends on the existence of a script repository or registry.\n- Relies on the aggregation process completing successfully before execution.\n- Interacts with the postAggregate hook mechanism to trigger execution.\n\n**Technical details**\n- Typically represented as a string data type.\n- May be stored in a database or configuration file referencing script metadata.\n- Used by the system's execution engine to locate and run the script.\n- Supports integration with scripting languages or environments supported by the platform."},"_stackId":{"type":"string","format":"objectId","description":"The unique identifier for the stack associated with the post-aggregation hook. 
This ID is used to reference and manage the specific stack context within which the hook operates, ensuring that the correct stack resources and configurations are applied during the post-aggregation process.\n\n**Field behavior**\n- Uniquely identifies the stack instance related to the post-aggregation hook.\n- Used to link the hook execution context to the appropriate stack environment.\n- Immutable once set to maintain consistency throughout the hook lifecycle.\n\n**Implementation guidance**\n- Must be a valid and existing stack identifier within the system.\n- Should be assigned automatically by the system or provided explicitly during hook configuration.\n- Ensure proper validation to prevent referencing non-existent or unauthorized stacks.\n\n**Examples**\n- \"stack-12345\"\n- \"prod-stack-67890\"\n- \"dev-environment-stack-001\"\n\n**Important notes**\n- This ID is critical for resolving stack-specific resources and permissions.\n- Incorrect or missing _stackId values can lead to hook execution failures.\n- Should be handled securely to prevent unauthorized access to stack data.\n\n**Dependency chain**\n- Depends on the stack management system to generate and maintain stack IDs.\n- Utilized by the post-aggregation hook execution engine to retrieve stack context.\n- May influence downstream processes that rely on stack-specific configurations.\n\n**Technical details**\n- Typically represented as a string.\n- Format and length may vary depending on the stack management system.\n- Should be indexed or cached for efficient lookup during hook execution."},"configuration":{"type":"object","description":"Configuration settings for the post-aggregate hook that define its behavior and parameters during execution. This property allows customization of the hook's operation by specifying key-value pairs or structured data relevant to the hook's logic.\n\n**Field behavior**\n- Accepts structured data or key-value pairs that tailor the hook's functionality.\n- Influences how the post-aggregate hook processes data after aggregation.\n- Can be optional or required depending on the specific hook implementation.\n- Supports dynamic adjustment of hook behavior without code changes.\n\n**Implementation guidance**\n- Validate configuration inputs to ensure they conform to expected formats and types.\n- Document all configurable options clearly for users to understand their impact.\n- Allow for extensibility to accommodate future configuration parameters.\n- Ensure secure handling of configuration data to prevent injection or misuse.\n\n**Examples**\n- {\"threshold\": 10, \"mode\": \"strict\"}\n- {\"enableLogging\": true, \"retryCount\": 3}\n- {\"filters\": [\"status:active\", \"region:us-east\"]}\n\n**Important notes**\n- Incorrect configuration values may cause the hook to malfunction or produce unexpected results.\n- Configuration should be versioned or tracked to maintain consistency across deployments.\n- Sensitive information should not be included in configuration unless properly encrypted or secured.\n\n**Dependency chain**\n- Depends on the specific post-aggregate hook implementation to interpret configuration.\n- May interact with other hook properties such as conditions or triggers.\n- Configuration changes can affect downstream processing or output of the aggregation.\n\n**Technical details**\n- Typically represented as a JSON object or map structure.\n- Parsed and applied at runtime when the post-aggregate hook is invoked.\n- Supports nested structures for complex configuration scenarios.\n- 
Should be compatible with the overall API schema and validation rules."}},"description":"A hook function that is executed after the aggregate operation has completed. This hook allows for custom logic to be applied to the aggregated results before they are returned or further processed. It is typically used to modify, filter, or augment the aggregated data based on specific application requirements.\n\n**Field behavior**\n- Invoked immediately after the aggregate query execution.\n- Receives the aggregated results as input.\n- Can modify or replace the aggregated data.\n- Supports asynchronous operations if needed.\n- Does not affect the execution of the aggregate operation itself.\n\n**Implementation guidance**\n- Ensure the hook handles errors gracefully to avoid disrupting the response.\n- Use this hook to implement custom transformations or validations on aggregated data.\n- Avoid heavy computations to maintain performance.\n- Return the modified data or the original data if no changes are needed.\n- Consider security implications when modifying aggregated results.\n\n**Examples**\n- Filtering out sensitive fields from aggregated results.\n- Adding computed properties based on aggregation output.\n- Logging or auditing aggregated data for monitoring purposes.\n- Transforming aggregated data into a different format before sending to clients.\n\n**Important notes**\n- This hook is specific to aggregate operations and will not trigger on other query types.\n- Modifications in this hook do not affect the underlying database.\n- The hook should return the final data to be used downstream.\n- Properly handle asynchronous code to ensure the hook completes before response.\n\n**Dependency chain**\n- Triggered after the aggregate query execution phase.\n- Can influence the data passed to subsequent hooks or response handlers.\n- Dependent on the successful completion of the aggregate operation.\n\n**Technical details**\n- Typically implemented as a function accepting the aggregated results and context.\n- Supports both synchronous and asynchronous function signatures.\n- Receives parameters such as the aggregated data, query context, and hook metadata.\n- Expected to return the processed aggregated data or a promise resolving to it."}},"description":"Defines a collection of hook functions or callbacks that are triggered at specific points during the execution lifecycle of a process or operation. 
These hooks allow customization and extension of default behavior by executing user-defined logic before, after, or during certain events.\n\n**Field behavior**\n- Supports multiple hook functions, each associated with a particular event or lifecycle stage.\n- Hooks are invoked automatically by the system when their corresponding event occurs.\n- Can be used to modify input data, handle side effects, perform validations, or trigger additional processes.\n- Execution order of hooks may be sequential or parallel depending on implementation.\n\n**Implementation guidance**\n- Ensure hooks are registered with clear event names or identifiers.\n- Validate hook functions for correct signature and expected parameters.\n- Provide error handling within hooks to avoid disrupting the main process.\n- Document available hook points and expected behavior for each.\n- Allow hooks to be asynchronous if the environment supports it.\n\n**Examples**\n- A \"beforeSave\" hook that validates data before saving to a database.\n- An \"afterFetch\" hook that formats or enriches data after retrieval.\n- A \"beforeDelete\" hook that checks permissions before allowing deletion.\n- A \"onError\" hook that logs errors or sends notifications.\n\n**Important notes**\n- Hooks should not introduce significant latency or side effects that impact core functionality.\n- Avoid circular dependencies or infinite loops caused by hooks triggering each other.\n- Security considerations must be taken into account when executing user-defined hooks.\n- Hooks may have access to sensitive data; ensure proper access controls.\n\n**Dependency chain**\n- Hooks depend on the lifecycle events or triggers defined by the system.\n- Hook execution may depend on the successful completion of prior steps.\n- Hook results can influence subsequent processing stages or final outcomes.\n\n**Technical details**\n- Typically implemented as functions, methods, or callbacks registered in a map or list.\n- May support synchronous or asynchronous execution models.\n- Can accept parameters such as context objects, event data, or state information.\n- Return values from hooks may be used to modify behavior or data flow.\n- Often integrated with event emitter or observer patterns."},"sampleResponseData":{"type":"object","description":"Contains example data representing a typical response returned by the API endpoint. This sample response data helps developers understand the structure, format, and types of values they can expect when interacting with the API. 
It serves as a reference for testing, debugging, and documentation purposes.\n\n**Field behavior**\n- Represents a static example of the API's response payload.\n- Should reflect the most common or typical response scenario.\n- May include nested objects, arrays, and various data types to illustrate the response structure.\n- Does not affect the actual API behavior or response generation.\n\n**Implementation guidance**\n- Ensure the sample data is accurate and up-to-date with the current API response schema.\n- Include all required fields and typical optional fields to provide a comprehensive example.\n- Use realistic values that clearly demonstrate the data format and constraints.\n- Update the sample response whenever the API response structure changes.\n\n**Examples**\n- A JSON object showing user details returned from a user info endpoint.\n- An array of product items with fields like id, name, price, and availability.\n- A nested object illustrating a complex response with embedded metadata and links.\n\n**Important notes**\n- This data is for illustrative purposes only and should not be used as actual input or output.\n- It helps consumers of the API to quickly grasp the expected response format.\n- Should be consistent with the API specification and schema definitions.\n\n**Dependency chain**\n- Depends on the API endpoint's response schema and data model.\n- Should align with any validation rules or constraints defined for the response.\n- May be linked to example requests or other documentation elements for completeness.\n\n**Technical details**\n- Typically formatted as a JSON or XML snippet matching the API's response content type.\n- May include placeholder values or sample identifiers.\n- Should be parsable and valid according to the API's response schema."},"responseTransform":{"allOf":[{"description":"Data transformation configuration for reshaping API response data during import operations.\n\n**Import-specific behavior**\n\n**Response Transformation**: Transforms the response data returned by the destination system after records have been imported. This allows you to reshape, filter, or enrich the response before it is processed by downstream flow steps or returned to the client.\n\n**Common Use Cases**:\n- Extracting relevant fields from verbose API responses\n- Normalizing response formats across different destination systems\n- Adding computed fields based on response data\n- Filtering out sensitive information before logging or further processing\n"},{"$ref":"#/components/schemas/Transform"}]},"modelMetadata":{"type":"object","description":"Contains detailed metadata about the model, including its version, architecture, training data characteristics, and any relevant configuration parameters that define its behavior and capabilities. 
This information helps in understanding the model's provenance, performance expectations, and compatibility with various tasks or environments.\n**Field behavior**\n- Captures comprehensive details about the model's identity and configuration.\n- May include version numbers, architecture type, training dataset descriptions, and hyperparameters.\n- Used for tracking model lineage and ensuring reproducibility.\n- Supports validation and compatibility checks before deployment or usage.\n**Implementation guidance**\n- Ensure metadata is accurate and up-to-date with the current model iteration.\n- Include standardized fields for easy parsing and comparison across models.\n- Use consistent formatting and naming conventions for metadata attributes.\n- Allow extensibility to accommodate future metadata elements without breaking compatibility.\n**Examples**\n- Model version: \"v1.2.3\"\n- Architecture: \"Transformer-based, 12 layers, 768 hidden units\"\n- Training data: \"Dataset XYZ, 1 million samples, balanced classes\"\n- Hyperparameters: \"learning_rate=0.001, batch_size=32\"\n**Important notes**\n- Metadata should be immutable once the model is finalized to ensure traceability.\n- Sensitive information such as proprietary training data details should be handled carefully.\n- Metadata completeness directly impacts model governance and audit processes.\n- Incomplete or inaccurate metadata can lead to misuse or misinterpretation of the model.\n**Dependency chain**\n- Relies on accurate model training and version control systems.\n- Interacts with deployment pipelines that validate model compatibility.\n- Supports monitoring systems that track model performance over time.\n**Technical details**\n- Typically represented as a structured object or JSON document.\n- May include nested fields for complex metadata attributes.\n- Should be easily serializable and deserializable for storage and transmission.\n- Can be linked to external documentation or repositories for extended information."},"mapping":{"type":"object","properties":{"fields":{"type":"string","enum":["string","number","boolean","numberarray","stringarray","json"],"description":"Defines the collection of individual field mappings within the overall mapping configuration. Each entry specifies the characteristics, data types, and indexing options for a particular field in the dataset or document structure. 
This property enables precise control over how each field is interpreted, stored, and queried by the system.\n\n**Field behavior**\n- Contains a set of key-value pairs where each key is a field name and the value is its mapping definition.\n- Determines how data in each field is processed, indexed, and searched.\n- Supports nested fields and complex data structures.\n- Can include settings such as data type, analyzers, norms, and indexing options.\n\n**Implementation guidance**\n- Define all relevant fields explicitly to optimize search and storage behavior.\n- Use consistent naming conventions for field names.\n- Specify appropriate data types to ensure correct parsing and querying.\n- Include nested mappings for objects or arrays as needed.\n- Validate field definitions to prevent conflicts or errors.\n\n**Examples**\n- Mapping a text field with a custom analyzer.\n- Defining a date field with a specific format.\n- Specifying a keyword field for exact match searches.\n- Creating nested object fields with their own sub-fields.\n\n**Important notes**\n- Omitting fields may lead to default dynamic mapping behavior, which might not be optimal.\n- Incorrect field definitions can cause indexing errors or unexpected query results.\n- Changes to field mappings often require reindexing of existing data.\n- Field names should avoid reserved characters or keywords.\n\n**Dependency chain**\n- Depends on the overall mapping configuration context.\n- Influences indexing and search components downstream.\n- Interacts with analyzers, tokenizers, and query parsers.\n\n**Technical details**\n- Typically represented as a JSON or YAML object with field names as keys.\n- Each field mapping includes properties like \"type\", \"index\", \"analyzer\", \"fields\", etc.\n- Supports complex types such as objects, nested, geo_point, and geo_shape.\n- May include metadata fields for internal use."},"lists":{"type":"string","enum":["string","number","boolean","numberarray","stringarray"],"description":"A collection of lists that define specific groupings or categories used within the mapping context. Each list contains a set of related items or values that are referenced to organize, filter, or map data effectively. 
These lists facilitate structured data handling and improve the clarity and maintainability of the mapping configuration.\n\n**Field behavior**\n- Contains multiple named lists, each representing a distinct category or grouping.\n- Lists can be referenced elsewhere in the mapping to apply consistent logic or transformations.\n- Supports dynamic or static content depending on the mapping requirements.\n- Enables modular and reusable data definitions within the mapping.\n\n**Implementation guidance**\n- Ensure each list has a unique identifier or name for clear referencing.\n- Populate lists with relevant and validated items to avoid mapping errors.\n- Use lists to centralize repeated values or categories to simplify updates.\n- Consider the size and complexity of lists to maintain performance and readability.\n\n**Examples**\n- A list of country codes used for regional mapping.\n- A list of product categories for classification purposes.\n- A list of status codes to standardize state representation.\n- A list of user roles for access control mapping.\n\n**Important notes**\n- Lists should be kept up-to-date to reflect current data requirements.\n- Avoid duplication of items across different lists unless intentional.\n- The structure and format of list items must align with the overall mapping schema.\n- Changes to lists may impact dependent mapping logic; test thoroughly after updates.\n\n**Dependency chain**\n- Lists may depend on external data sources or configuration files.\n- Other mapping properties or rules may reference these lists for validation or transformation.\n- Updates to lists can cascade to affect downstream processing or output.\n\n**Technical details**\n- Typically represented as arrays or collections within the mapping schema.\n- Items within lists can be simple values (strings, numbers) or complex objects.\n- Supports nesting or hierarchical structures if the schema allows.\n- May include metadata or annotations to describe list purpose or usage."}},"description":"Defines the association between input data fields and their corresponding target fields or processing rules within the system. This mapping specifies how data should be transformed, routed, or interpreted during processing to ensure accurate and consistent handling. 
It serves as a blueprint for data translation and integration tasks, enabling seamless interoperability between different data formats or components.\n\n**Field behavior**\n- Maps source fields to target fields or processing instructions.\n- Determines how input data is transformed or routed.\n- Can include nested or hierarchical mappings for complex data structures.\n- Supports conditional or dynamic mapping rules based on input values.\n\n**Implementation guidance**\n- Ensure mappings are clearly defined and validated to prevent data loss or misinterpretation.\n- Use consistent naming conventions for source and target fields.\n- Support extensibility to accommodate new fields or transformation rules.\n- Provide mechanisms for error handling when mappings fail or are incomplete.\n\n**Examples**\n- Mapping a JSON input field \"user_name\" to a database column \"username\".\n- Defining a transformation rule that converts date formats from \"MM/DD/YYYY\" to \"YYYY-MM-DD\".\n- Routing data from an input field \"status\" to different processing modules based on its value.\n\n**Important notes**\n- Incorrect or incomplete mappings can lead to data corruption or processing errors.\n- Mapping definitions should be version-controlled to track changes over time.\n- Consider performance implications when applying complex or large-scale mappings.\n\n**Dependency chain**\n- Dependent on the structure and schema of input data.\n- Influences downstream data processing, validation, and storage components.\n- May interact with transformation, validation, and routing modules.\n\n**Technical details**\n- Typically represented as key-value pairs, dictionaries, or mapping objects.\n- May support expressions or functions for dynamic value computation.\n- Can be serialized in formats such as JSON, YAML, or XML for configuration.\n- Should include metadata for data types, optionality, and default values where applicable."},"mappings":{"allOf":[{"description":"Field mapping configuration for transforming incoming records during import operations.\n\n**Import-specific behavior**\n\n**Data Transformation**: Mappings define how fields from incoming records are transformed and mapped to the destination system's field structure. This enables data normalization, field renaming, value transformation, and complex nested object construction.\n\n**Common Use Cases**:\n- Mapping source field names to destination field names\n- Transforming data types and formats\n- Building nested object structures required by the destination\n- Applying formulas and lookups to derive field values\n"},{"$ref":"#/components/schemas/Mappings"}]},"lookups":{"type":"string","description":"A collection of predefined reference data or mappings used to standardize and validate input values within the system. 
Lookups typically consist of key-value pairs or enumerations that facilitate consistent data interpretation and reduce errors by providing a controlled vocabulary or set of options.\n\n**Field behavior**\n- Serves as a centralized source for reference data used across various components.\n- Enables validation and normalization of input values against predefined sets.\n- Supports dynamic retrieval of lookup values to accommodate updates without code changes.\n- May include hierarchical or categorized data structures for complex reference data.\n\n**Implementation guidance**\n- Ensure lookups are comprehensive and cover all necessary reference data for the application domain.\n- Design lookups to be easily extendable and maintainable, allowing additions or modifications without impacting existing functionality.\n- Implement caching mechanisms if lookups are frequently accessed to optimize performance.\n- Provide clear documentation for each lookup entry to aid developers and users in understanding their purpose.\n\n**Examples**\n- Country codes and names (e.g., \"US\" -> \"United States\").\n- Status codes for order processing (e.g., \"PENDING\", \"SHIPPED\", \"DELIVERED\").\n- Currency codes and symbols (e.g., \"USD\" -> \"$\").\n- Product categories or types used in an inventory system.\n\n**Important notes**\n- Lookups should be kept up-to-date to reflect changes in business rules or external standards.\n- Avoid hardcoding lookup values in multiple places; centralize them to maintain consistency.\n- Consider localization requirements if lookup values need to be presented in multiple languages.\n- Validate input data against lookups to prevent invalid or unsupported values from entering the system.\n\n**Dependency chain**\n- Often used by input validation modules, UI dropdowns, reporting tools, and business logic components.\n- May depend on external data sources or configuration files for initialization.\n- Can influence downstream processing and data storage by enforcing standardized values.\n\n**Technical details**\n- Typically implemented as dictionaries, maps, or database tables.\n- May support versioning to track changes over time.\n- Can be exposed via APIs for dynamic retrieval by client applications.\n- Should include mechanisms for synchronization if distributed across multiple systems."},"settingsForm":{"$ref":"#/components/schemas/Form"},"preSave":{"$ref":"#/components/schemas/PreSave"},"settings":{"$ref":"#/components/schemas/Settings"}}},"As2":{"type":"object","description":"Configuration for As2 imports","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"The unique identifier for the Trading Partner (TP) Connector utilized within the AS2 communication framework. This identifier serves as a critical reference that links the AS2 configuration to the specific connector responsible for the secure transmission and reception of AS2 messages. It ensures that all message exchanges are accurately routed through the designated connector, thereby maintaining the integrity, security, and reliability of the communication process. This ID is fundamental for establishing, managing, and terminating AS2 sessions, enabling proper encryption, digital signing, and delivery confirmation of messages. 
It acts as a key element in the orchestration of AS2 workflows, ensuring seamless interoperability between trading partners.\n\n**Field behavior**\n- Uniquely identifies a TP Connector within the trading partner ecosystem.\n- Associates AS2 message exchanges with the appropriate connector configuration.\n- Essential for initiating, maintaining, and terminating AS2 communication sessions.\n- Immutable once set to preserve consistent routing and session management.\n- Validated against existing connectors to prevent misconfiguration.\n\n**Implementation guidance**\n- Must correspond to a valid, active connector ID registered in the trading partner management system.\n- Should be assigned during initial AS2 setup and remain unchanged unless a deliberate reconfiguration is required.\n- Implement validation checks to confirm the ID’s existence and compatibility with AS2 protocols before assignment.\n- Ensure synchronization between the connector ID and its configuration to avoid communication failures.\n- Handle updates cautiously, with appropriate notifications and fallback mechanisms to maintain session continuity.\n\n**Examples**\n- \"connector-12345\"\n- \"tp-connector-67890\"\n- \"as2-connector-001\"\n\n**Important notes**\n- Incorrect or invalid IDs can lead to failed message transmissions or misrouted communications.\n- The referenced connector must be fully configured to support AS2 standards, including security and protocol settings.\n- Changes to this identifier should be managed through controlled processes to prevent disruption of active AS2 sessions.\n- This ID is integral to the AS2 message lifecycle, impacting message encryption, signing, and delivery confirmation.\n\n**Dependency chain**\n- Relies on the existence and proper configuration of a TP Connector entity within the trading partner management system.\n- Interacts closely with AS2 configuration parameters that govern message handling, security credentials, and session management.\n- Dependent on system components responsible for connector registration, validation, and lifecycle management.\n\n**Technical details**\n- Represented as a string conforming to the identifier format defined by the trading partner management system.\n- Used internally by the AS2"},"fileNameTemplate":{"type":"string","description":"fileNameTemplate specifies the template string used to dynamically generate file names for AS2 message payloads during both transmission and receipt processes. This template allows the inclusion of customizable placeholders that are replaced at runtime with relevant contextual values such as timestamps, unique message identifiers, sender and receiver IDs, or other metadata. By leveraging these placeholders, the system ensures that generated file names are unique, descriptive, and meaningful, facilitating efficient file management, traceability, and downstream processing. 
The template is crucial for systematically organizing files, preventing naming conflicts, supporting audit trails, and enabling seamless integration with various file storage and transfer mechanisms.\n\n**Field behavior**\n- Defines the naming pattern and structure for AS2 message payload files.\n- Supports dynamic substitution of placeholders with runtime metadata values.\n- Ensures uniqueness and clarity in generated file names to prevent collisions.\n- Directly influences file storage organization, retrieval efficiency, and processing workflows.\n- Affects audit trails and logging by enforcing consistent and meaningful file naming conventions.\n- Applies uniformly to both outgoing and incoming AS2 message payloads to maintain consistency.\n- Allows flexibility to adapt file naming to organizational standards and compliance requirements.\n\n**Implementation guidance**\n- Use clear, descriptive placeholders such as {timestamp}, {messageId}, {senderId}, {receiverId}, and {date} to capture relevant metadata.\n- Validate templates to exclude invalid or unsupported characters based on the target file system’s constraints.\n- Incorporate date and time components to improve uniqueness and enable chronological sorting of files.\n- Explicitly include file extensions (e.g., .xml, .edi, .txt) to reflect the payload format and ensure compatibility with downstream systems.\n- Provide sensible default templates to maintain system functionality when custom templates are not specified.\n- Implement robust parsing and substitution logic to reliably handle all supported placeholders and edge cases.\n- Sanitize substituted values to prevent injection of unsafe, invalid, or reserved characters that could cause errors.\n- Consider locale and timezone settings when generating date/time placeholders to ensure consistency across systems.\n- Support escape sequences or delimiter customization to handle special characters within templates if necessary.\n\n**Examples**\n- \"AS2_{senderId}_{timestamp}.xml\"\n- \"{messageId}_{receiverId}_{date}.edi\"\n- \"payload_{timestamp}.txt\"\n- \"msg_{date}_{senderId}_{receiverId}_{messageId}.xml\"\n- \"AS2_{timestamp}_{messageId}.payload\"\n- \"inbound_{receiverId}_{date}_{messageId}.xml\"\n-"},"messageIdTemplate":{"type":"string","description":"messageIdTemplate is a customizable template string designed to generate the Message-ID header for AS2 messages, enabling the creation of unique, meaningful, and standards-compliant identifiers for each message. This template supports dynamic placeholders that are replaced with actual runtime values—such as timestamps, UUIDs, sequence numbers, sender-specific information, or other contextual data—ensuring that every Message-ID is globally unique and strictly adheres to RFC 5322 email header formatting standards. 
By defining the precise format of the Message-ID, this property plays a critical role in message tracking, logging, troubleshooting, auditing, and interoperability within AS2 communication workflows, facilitating reliable message correlation and lifecycle management.\n\n**Field behavior**\n- Specifies the exact format and structure of the Message-ID header in AS2 protocol messages.\n- Supports dynamic placeholders that are substituted with real-time values during message creation.\n- Guarantees global uniqueness of each Message-ID to prevent duplication and enable accurate message identification.\n- Directly influences message correlation, tracking, diagnostics, and auditing processes across AS2 systems.\n- Ensures compliance with email header syntax and AS2 protocol requirements to maintain interoperability.\n- Affects downstream processing by systems that rely on Message-ID for message lifecycle management.\n\n**Implementation guidance**\n- Use a clear and consistent placeholder syntax (e.g., `${placeholderName}`) for dynamic content insertion.\n- Incorporate unique elements such as UUIDs, timestamps, sequence numbers, sender domains, or other identifiers to guarantee global uniqueness.\n- Validate the generated Message-ID string against AS2 protocol specifications and RFC 5322 email header standards.\n- Ensure the final Message-ID is properly formatted, enclosed in angle brackets (`< >`), and free from invalid or disallowed characters.\n- Avoid embedding sensitive or confidential information within the Message-ID to mitigate potential security risks.\n- Thoroughly test the template to confirm compatibility with receiving AS2 systems and message processing components.\n- Consider the impact of the template on downstream systems that rely on Message-ID for message correlation and tracking.\n- Maintain consistency in template usage across environments to support reliable message auditing and troubleshooting.\n\n**Examples**\n- `<${uuid}@example.com>` — generates a Message-ID combining a UUID with a domain name.\n- `<${timestamp}.${sequence}@as2sender.com>` — includes a timestamp and sequence number for ordered uniqueness.\n- `<msg-${messageNumber}@${senderDomain}>` — uses message number and sender domain placeholders for traceability.\n- `<${date}-${uuid}@company"},"maxRetries":{"type":"number","description":"Maximum number of retry attempts for the AS2 message transmission in case of transient failures such as network errors, timeouts, or temporary unavailability of the recipient system. This setting controls how many times the system will automatically attempt to resend a message after an initial failure before marking the transmission as failed. 
It applies exclusively to retry attempts following the first transmission and does not influence the initial send operation.\n\n**Field behavior**\n- Specifies the total count of resend attempts after the initial transmission failure.\n- Retries are triggered only for recoverable, transient errors where subsequent attempts have a reasonable chance of success.\n- Retry attempts cease once the configured maximum number is reached, resulting in a final failure status.\n- Does not affect the initial message send; governs only the retry logic after the first failure.\n- Retry attempts are typically spaced out using backoff strategies to prevent overwhelming the network or recipient system.\n- Each retry is initiated only after the previous attempt has conclusively failed.\n- The retry mechanism respects configured backoff intervals and timing constraints to optimize network usage and avoid congestion.\n\n**Implementation guidance**\n- Select a balanced default value to avoid excessive retries that could cause delays or strain system resources.\n- Implement exponential backoff or incremental delay strategies between retries to reduce network congestion and improve success rates.\n- Ensure retry attempts comply with overall operation timeouts and do not exceed system or protocol limits.\n- Validate input to accept only non-negative integers; setting zero disables retries entirely.\n- Integrate retry logic with error detection, logging, and alerting mechanisms for comprehensive failure handling and monitoring.\n- Consider the impact of retries on message ordering and idempotency to prevent duplicate processing or inconsistent states.\n- Provide clear feedback and detailed logging for each retry attempt to facilitate troubleshooting and operational transparency.\n\n**Examples**\n- maxRetries: 3 — The system will attempt to resend the message up to three times after the initial failure before giving up.\n- maxRetries: 0 — No retries will be performed; the failure is reported immediately after the first unsuccessful attempt.\n- maxRetries: 5 — Allows up to five retry attempts, increasing the chance of successful delivery in unstable network conditions.\n- maxRetries: 1 — A single retry attempt is made, suitable for environments with low tolerance for delays.\n\n**Important notes**\n- Excessive retry attempts can increase latency and consume additional system and network resources.\n- Retry behavior should align with AS2 protocol standards and best practices to maintain compliance and interoperability.\n- This setting improves"},"headers":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the MIME type of the AS2 message content, indicating the format of the data being transmitted. 
This helps the receiving system understand how to process and interpret the message payload.\n\n**Field behavior**\n- Defines the content type of the AS2 message payload.\n- Used by the receiver to determine how to parse and handle the message.\n- Typically corresponds to standard MIME types such as \"application/edi-x12\", \"application/edi-consent\", or \"application/xml\".\n- May influence processing rules, such as encryption, compression, or validation steps.\n\n**Implementation guidance**\n- Ensure the value is a valid MIME type string.\n- Use standard MIME types relevant to AS2 transmissions.\n- Avoid custom or non-standard MIME types unless both sender and receiver explicitly support them.\n- Validate the type value against expected content formats in the AS2 communication agreement.\n\n**Examples**\n- \"application/edi-x12\"\n- \"application/edi-consent\"\n- \"application/xml\"\n- \"text/plain\"\n\n**Important notes**\n- The type field is critical for interoperability between AS2 trading partners.\n- Incorrect or missing type values can lead to message processing errors or rejection.\n- This field is part of the AS2 message headers and should conform to AS2 protocol specifications.\n\n**Dependency chain**\n- Depends on the content of the AS2 message payload.\n- Influences downstream processing components that handle message parsing and validation.\n- May be linked with other headers such as content-transfer-encoding or content-disposition.\n\n**Technical details**\n- Represented as a string in MIME type format.\n- Included in the AS2 message headers under the \"Content-Type\" header.\n- Should comply with RFC 2045 and RFC 2046 MIME standards."},"value":{"type":"string","description":"The value of the AS2 header, representing the content associated with the specified header name.\n\n**Field behavior**\n- Holds the actual content or data for the AS2 header identified by the corresponding header name.\n- Can be a string or any valid data type supported by the AS2 protocol for header values.\n- Used during the construction or parsing of AS2 messages to specify header details.\n\n**Implementation guidance**\n- Ensure the value conforms to the expected format and encoding for AS2 headers.\n- Validate the value against any protocol-specific constraints or length limits.\n- When setting this value, consider the impact on message integrity and compliance with AS2 standards.\n- Support dynamic assignment to accommodate varying header requirements.\n\n**Examples**\n- \"application/edi-x12\"\n- \"12345\"\n- \"attachment; filename=\\\"invoice.xml\\\"\"\n- \"UTF-8\"\n\n**Important notes**\n- The value must be compatible with the corresponding header name to maintain protocol correctness.\n- Incorrect or malformed values can lead to message rejection or processing errors.\n- Some headers may require specific formatting or encoding (e.g., MIME types, character sets).\n\n**Dependency chain**\n- Dependent on the `as2.headers.name` property which specifies the header name.\n- Influences the overall AS2 message headers structure and processing logic.\n\n**Technical details**\n- Typically represented as a string in the AS2 message headers.\n- May require encoding or escaping depending on the header content.\n- Should comply with RFC 4130 and related AS2 specifications for header formatting."},"_id":{"type":"object","description":"Unique identifier for the AS2 message header.\n\n**Field behavior**\n- Serves as a unique key to identify the AS2 message header within the system.\n- Typically assigned 
automatically by the system or provided by the client to ensure message traceability.\n- Used to correlate messages and track message processing status.\n\n**Implementation guidance**\n- Should be a string value that uniquely identifies the header.\n- Must be immutable once assigned to prevent inconsistencies.\n- Should follow a consistent format (e.g., UUID, GUID) to avoid collisions.\n- Ensure uniqueness across all AS2 message headers in the system.\n\n**Examples**\n- \"123e4567-e89b-12d3-a456-426614174000\"\n- \"AS2Header_001\"\n- \"msg-header-20240427-0001\"\n\n**Important notes**\n- This field is critical for message tracking and auditing.\n- Avoid using easily guessable or sequential values to enhance security.\n- If not provided, the system should generate a unique identifier automatically.\n\n**Dependency chain**\n- May be referenced by other components or logs that track message processing.\n- Related to message metadata and processing status fields.\n\n**Technical details**\n- Data type: string\n- Format: typically UUID or a unique string identifier\n- Immutable after creation\n- Indexed for efficient lookup in databases or message stores"},"default":{"type":["object","null"],"description":"A boolean property that specifies whether the current header should be used as the default header in the AS2 message configuration. When set to true, this header will be applied automatically unless overridden by a more specific header setting.\n\n**Field behavior**\n- Determines if the header is the default choice for AS2 message headers.\n- If true, this header is applied automatically in the absence of other specific header configurations.\n- If false or omitted, the header is not considered the default and must be explicitly specified to be used.\n\n**Implementation guidance**\n- Use this property to designate a fallback or standard header configuration for AS2 messages.\n- Ensure only one header is marked as default to avoid conflicts.\n- Validate the boolean value to accept only true or false.\n- Consider the impact on message routing and processing when setting this property.\n\n**Examples**\n- `default: true` — This header is the default and will be used unless another header is specified.\n- `default: false` — This header is not the default and must be explicitly selected.\n\n**Important notes**\n- Setting multiple headers as default may lead to unpredictable behavior.\n- The default header is applied only when no other header overrides are present.\n- This property is optional; if omitted, the header is not treated as default.\n\n**Dependency chain**\n- Related to other header properties within `as2.headers`.\n- Influences message header selection logic in AS2 message processing.\n- May interact with routing or profile configurations that specify header usage.\n\n**Technical details**\n- Data type: boolean\n- Default value: false (if omitted)\n- Located under the `as2.headers` object in the configuration schema\n- Used during AS2 message construction to determine header application"}},"description":"A collection of HTTP headers included in the AS2 message transmission, serving as essential metadata and control information that governs the accurate handling, routing, authentication, and processing of the AS2 message between trading partners. These headers must strictly adhere to standard HTTP header formats and encoding rules, encompassing mandatory fields such as Content-Type, Message-ID, AS2-From, AS2-To, and other protocol-specific headers critical for AS2 communication. 
Proper configuration and validation of these headers are vital to maintaining message integrity, security (including encryption and digital signatures), and ensuring successful interoperability within the AS2 ecosystem. This collection supports both standard and custom headers as required by specific trading partner agreements or business needs, with header names treated as case-insensitive per HTTP standards.  \n**Field behavior:**  \n- Represents key-value pairs of HTTP headers transmitted alongside the AS2 message.  \n- Directly influences message routing, identification, authentication, security, and processing by the receiving system.  \n- Supports inclusion of both mandatory AS2 headers and additional custom headers as dictated by trading partner agreements or specific business requirements.  \n- Header names are case-insensitive, consistent with HTTP/1.1 specifications.  \n- Facilitates communication of protocol-specific instructions, security parameters (e.g., MIC, Disposition-Notification-To), and message metadata essential for AS2 transactions.  \n- Enables conveying information necessary for message encryption, signing, and receipt acknowledgments.  \n**Implementation guidance:**  \n- Ensure all header names and values conform strictly to HTTP/1.1 header syntax, character encoding, and escaping rules.  \n- Include all mandatory AS2 headers such as AS2-From, AS2-To, Message-ID, Content-Type, and security-related headers like MIC and Disposition-Notification-To.  \n- Avoid duplicate headers unless explicitly allowed by the AS2 specification or trading partner agreements.  \n- Validate header values rigorously for correctness, format compliance, and security to prevent injection attacks or protocol violations.  \n- Consider the impact of headers on message security features including encryption, digital signatures, and receipt acknowledgments.  \n- Maintain consistency with trading partner agreements regarding required, optional, and custom headers to ensure seamless interoperability.  \n- Use appropriate character encoding and escape sequences to handle special or non-ASCII characters within header values.  \n**Examples:**  \n- Content-Type: application/edi-x12  \n- AS2-From: PartnerA  \n- AS2-To: PartnerB  \n- Message-ID: <"}}},"Dynamodb":{"type":"object","description":"Configuration for Dynamodb imports","properties":{"region":{"type":"string","description":"Specifies the AWS region where the DynamoDB service is hosted and where all database operations will be executed. This property defines the precise geographical location of the DynamoDB instance, which is essential for optimizing application latency, ensuring compliance with data residency and sovereignty regulations, and aligning with organizational infrastructure strategies. Selecting the appropriate region helps minimize network latency by placing data physically closer to end-users or application servers, thereby improving performance and responsiveness. Additionally, the chosen region influences service availability, pricing structures, feature sets, and legal compliance requirements, making it a critical configuration parameter. The region value must be a valid AWS region identifier (such as \"us-east-1\" or \"eu-west-1\") and is used to construct the correct service endpoint URL for all DynamoDB interactions. 
Proper configuration of this property is vital for seamless integration with AWS SDKs, authentication mechanisms, and IAM policies that may be region-specific, ensuring secure and efficient access to DynamoDB resources.\n\n**Field behavior**\n- Determines the AWS data center location for all DynamoDB operations.\n- Directly impacts network latency, data residency, and compliance adherence.\n- Must be specified as a valid and supported AWS region code.\n- Influences the endpoint URL, authentication endpoints, and service routing.\n- Affects service availability, pricing, feature availability, and latency.\n- Remains consistent throughout the lifecycle of the application unless explicitly changed.\n\n**Implementation guidance**\n- Validate the region against the current list of supported AWS regions to prevent misconfiguration and runtime errors.\n- Use the region value to dynamically construct the DynamoDB service endpoint URL and configure SDK clients accordingly.\n- Allow the region to be configurable and overrideable to support testing environments, multi-region deployments, disaster recovery, or failover scenarios.\n- Ensure consistency of the region setting across all AWS service configurations within the application to avoid cross-region conflicts and data inconsistency.\n- Consider the implications of region selection on data sovereignty laws, compliance requirements, and organizational policies.\n- Monitor AWS announcements for new regions or changes in service availability to keep configurations up to date.\n\n**Examples**\n- \"us-east-1\" (Northern Virginia, USA)\n- \"eu-west-1\" (Ireland, Europe)\n- \"ap-southeast-2\" (Sydney, Australia)\n- \"ca-central-1\" (Canada Central)\n- \"sa-east-1\" (São Paulo, Brazil)\n\n**Important notes**\n- Selecting the correct region is crucial for optimizing application performance, cost efficiency,"},"method":{"type":"string","enum":["putItem","updateItem"],"description":"Specifies which DynamoDB write operation the import uses to load each record into the target table. The supported values map directly to DynamoDB’s item-level write APIs: putItem creates a new item or completely replaces an existing item that has the same primary key, while updateItem modifies only the attributes referenced in the update expression and creates the item if it does not already exist. 
\n\n**Field behavior**\n- Determines which DynamoDB API action is invoked for each imported record.\n- putItem writes the full item document; attributes omitted from the document are removed from any existing item with the same key.\n- updateItem applies a partial update driven by updateExpression, conditionExpression, expressionAttributeNames, and expressionAttributeValues, leaving unreferenced attributes intact.\n- Must be one of the enumerated values; other DynamoDB operations are not supported for imports.\n\n**Implementation guidance**\n- Use putItem when each incoming record represents the complete, authoritative state of the item.\n- Use updateItem for partial updates, counters, or conditional writes where existing attributes must be preserved.\n- Populate the related properties to match the selected method: itemDocument for putItem; updateExpression and its attribute-name and attribute-value maps for updateItem.\n\n**Examples**\n- putItem: inserts or fully replaces the item identified by the configured partition key (and sort key, if defined).\n- updateItem: applies an expression such as \"SET #status = :statusValue\" to an existing item.\n\n**Important notes**\n- Selecting the wrong method can silently drop attributes (putItem) or fail when required update expressions are missing (updateItem)."},"tableName":{"type":"string","description":"The name of the DynamoDB table to be accessed or manipulated, serving as the primary identifier for routing API requests to the correct data store. This string must exactly match the name of an existing DynamoDB table within the specified AWS account and region, including case sensitivity. 
It is essential for performing operations such as reading, writing, updating, or deleting items within the table, ensuring that all data interactions target the intended resource accurately.\n\n**Field behavior**\n- Specifies the exact DynamoDB table targeted for all data operations.\n- Must be unique within the AWS account and region to prevent ambiguity.\n- Directs API calls to the appropriate table for the requested action.\n- Case-sensitive and must strictly adhere to DynamoDB naming conventions.\n- Acts as a critical routing parameter in API requests to identify the data source.\n\n**Implementation guidance**\n- Confirm the table name matches the existing DynamoDB table in AWS, respecting case sensitivity.\n- Avoid using reserved words or disallowed special characters to prevent API errors.\n- Validate the table name format programmatically before making API calls.\n- Manage table names through environment variables or configuration files to facilitate deployment across multiple environments (development, staging, production).\n- Ensure all application components referencing the table name are updated consistently if the table name changes.\n- Incorporate error handling to manage cases where the specified table does not exist or is inaccessible.\n\n**Examples**\n- \"Users\"\n- \"Orders2024\"\n- \"Inventory_Table\"\n- \"CustomerData\"\n- \"Sales-Data.2024\"\n\n**Important notes**\n- DynamoDB table names can be up to 255 characters in length.\n- Table names are case-sensitive and must be used consistently across all API calls.\n- The specified table must exist in the targeted AWS region before any operation is performed.\n- Changing the table name requires updating all dependent code and redeploying applications as necessary.\n- Incorrect table names will result in API errors such as ResourceNotFoundException.\n\n**Dependency chain**\n- Dependent on the AWS account and region context where DynamoDB is accessed.\n- Requires appropriate IAM permissions to perform operations on the specified table.\n- Works in conjunction with other DynamoDB parameters such as key schema, attribute definitions, and provisioned throughput settings.\n- Relies on network connectivity and AWS service availability for successful API interactions.\n\n**Technical details**\n- Represented as a string value.\n- Must conform to DynamoDB naming rules: allowed characters include alphanumeric characters, underscore (_), hy"},"partitionKey":{"type":"string","description":"partitionKey specifies the attribute name used as the primary partition key in a DynamoDB table. This key is essential for uniquely identifying each item within the table and determines the partition where the data is physically stored. It plays a fundamental role in data distribution, scalability, and query efficiency by enabling DynamoDB to hash the key value and allocate items across multiple storage partitions. The partition key must be present in every item and is required for all operations involving item creation, retrieval, update, and deletion. Selecting an appropriate partition key with high cardinality ensures even data distribution, prevents performance bottlenecks, and supports efficient query patterns. 
When combined with an optional sort key, it forms a composite primary key that enables more complex data models and range queries.\n\n**Field behavior**\n- Defines the primary attribute used to partition and organize data in a DynamoDB table.\n- Must have unique values within the context of the partition to ensure item uniqueness.\n- Drives the distribution of data across partitions to optimize performance and scalability.\n- Required for all item-level operations such as put, get, update, and delete.\n- Works in tandem with an optional sort key to form a composite primary key for more complex data models.\n\n**Implementation guidance**\n- Select an attribute with a high cardinality (many distinct values) to promote even data distribution and avoid hot partitions.\n- Ensure the partition key attribute is included in every item stored in the table.\n- Analyze application access patterns to choose a partition key that supports efficient queries and minimizes read/write contention.\n- Use a composite key (partition key + sort key) when you need to model one-to-many relationships or perform range queries.\n- Avoid using attributes with low variability or skewed distributions as partition keys to prevent performance bottlenecks.\n\n**Examples**\n- \"userId\" for applications where data is organized by individual users.\n- \"orderId\" in an e-commerce system to uniquely identify each order.\n- \"deviceId\" for storing telemetry data from IoT devices.\n- \"customerEmail\" when email addresses serve as unique identifiers.\n- \"regionCode\" combined with a sort key for geographic data partitioning.\n\n**Important notes**\n- The partition key is mandatory and immutable after table creation; changing it requires creating a new table.\n- The attribute type must be one of DynamoDB’s supported scalar types: String, Number, or Binary.\n- Partition key values are hashed internally by DynamoDB to determine the physical partition.\n- Using a poorly chosen partition key"},"sortKey":{"type":"string","description":"The attribute name designated as the sort key (also known as the range key) in a DynamoDB table. This key is essential for ordering and organizing items that share the same partition key, enabling efficient sorting, range queries, and precise retrieval of data within a partition. Together with the partition key, the sort key forms the composite primary key that uniquely identifies each item in the table. It plays a critical role in query operations by allowing filtering, sorting, and conditional retrieval based on its values, which can be of type String, Number, or Binary. 
Proper definition and usage of the sort key significantly enhance data access patterns, query performance, and cost efficiency in DynamoDB.\n\n**Field behavior**\n- Defines the attribute DynamoDB uses to sort and organize items within the same partition key.\n- Enables efficient range queries, ordered retrieval, and filtering of items.\n- Must be unique in combination with the partition key for each item to ensure item uniqueness.\n- Supports composite key structures when combined with the partition key.\n- Influences the physical storage order of items within a partition, optimizing query performance.\n\n**Implementation guidance**\n- Specify the attribute name exactly as defined in the DynamoDB table schema.\n- Ensure the attribute type matches the table definition (String, Number, or Binary).\n- Use this key to perform queries requiring sorting, range-based filtering, or conditional retrieval.\n- Omit or set to null if the DynamoDB table does not define a sort key.\n- Validate that the sort key attribute exists and is properly indexed in the table schema before use.\n- Consider the sort key design carefully to support intended query patterns and access efficiency.\n\n**Examples**\n- \"timestamp\"\n- \"createdAt\"\n- \"orderId\"\n- \"category#date\"\n- \"eventDate\"\n- \"userScore\"\n\n**Important notes**\n- The sort key is optional; some DynamoDB tables only have a partition key without a sort key.\n- Queries specifying both partition key and sort key conditions are more efficient and cost-effective.\n- The sort key attribute must be pre-defined in the table schema and cannot be added dynamically.\n- The combination of partition key and sort key must be unique for each item.\n- Sort key values influence the physical storage order of items within a partition, impacting query speed.\n- Using a well-designed sort key can reduce the need for additional indexes and improve scalability.\n\n**Dependency chain**\n- Requires a DynamoDB table configured with a composite primary key (partition key"},"itemDocument":{"type":"string","description":"The `itemDocument` property represents the complete and authoritative data record stored within a DynamoDB table. It is structured as a JSON object that comprehensively includes all attribute names and their corresponding values, fully reflecting the schema and data model of the DynamoDB item. This property encapsulates every attribute of the item, including mandatory primary keys, optional sort keys (if applicable), and any additional data fields, supporting complex nested objects, maps, lists, and arrays to accommodate DynamoDB’s rich data types. 
It serves as the primary representation of an item for all item-level operations such as reading (GetItem), writing (PutItem), updating (UpdateItem), and deleting (DeleteItem) within DynamoDB, ensuring data integrity and consistency throughout these processes.\n\n**Field behavior**\n- Contains the full and exact representation of a DynamoDB item as a structured JSON document.\n- Includes all required attributes such as primary keys and sort keys, as well as any optional or additional fields.\n- Supports complex nested data structures including maps, lists, and sets, consistent with DynamoDB’s flexible data model.\n- Acts as the main payload for CRUD (Create, Read, Update, Delete) operations on DynamoDB items.\n- Reflects either the current stored state or the intended new state of the item depending on the operation context.\n- Can represent partial documents when used with projection expressions or update operations that modify subsets of attributes.\n\n**Implementation guidance**\n- Ensure strict adherence to DynamoDB’s supported data types (String, Number, Binary, Boolean, Null, List, Map, Set) and attribute naming conventions.\n- Validate presence and correct formatting of primary key attributes to uniquely identify the item for key-based operations.\n- For update operations, provide either the entire item document or only the attributes to be modified, depending on the API method and update strategy.\n- Maintain consistent attribute names and data types across operations to preserve schema integrity and prevent runtime errors.\n- Utilize attribute projections or partial documents when full item data is unnecessary to optimize performance, reduce latency, and minimize cost.\n- Handle nested structures carefully to ensure proper serialization and deserialization between application code and DynamoDB.\n\n**Examples**\n- `{ \"UserId\": \"12345\", \"Name\": \"John Doe\", \"Age\": 30, \"Preferences\": { \"Language\": \"en\", \"Notifications\": true } }`\n- `{ \"OrderId\": \"A100\", \"Date\": \"2024-06-01\", \"Items\":"},"updateExpression":{"type":"string","description":"A string that defines the specific attributes to be updated, added, or removed within a DynamoDB item using DynamoDB's UpdateExpression syntax. This expression provides precise control over how item attributes are modified by including one or more of the following clauses: SET (to assign or overwrite attribute values), REMOVE (to delete attributes), ADD (to increment numeric attributes or add elements to sets), and DELETE (to remove elements from sets). The updateExpression enables atomic, conditional, and efficient updates without replacing the entire item, ensuring data integrity and minimizing write costs. It must be carefully constructed using placeholders for attribute names and values—ExpressionAttributeNames and ExpressionAttributeValues—to avoid conflicts with reserved keywords and prevent injection vulnerabilities. 
This expression is essential for performing complex update operations in a single request, supporting both simple attribute changes and advanced manipulations like incrementing counters or modifying sets.\n\n**Field behavior**\n- Specifies one or more atomic update operations on a DynamoDB item's attributes.\n- Supports combining multiple update actions (SET, REMOVE, ADD, DELETE) within a single expression, separated by spaces.\n- Requires strict adherence to DynamoDB's UpdateExpression syntax and semantics.\n- Works in conjunction with ExpressionAttributeNames and ExpressionAttributeValues to safely reference attribute names and values.\n- Only affects the attributes explicitly mentioned; all other attributes remain unchanged.\n- Enables conditional updates when combined with ConditionExpression for safe concurrent modifications.\n\n**Implementation guidance**\n- Construct the updateExpression using valid DynamoDB syntax, incorporating clauses such as SET, REMOVE, ADD, and DELETE as needed.\n- Always use placeholders (e.g., '#attrName' for attribute names and ':value' for attribute values) to avoid conflicts with reserved words and enhance security.\n- Validate the expression syntax and ensure all referenced placeholders are defined before executing the update operation.\n- Combine multiple update actions by separating them with spaces within the same expression string.\n- Test expressions thoroughly to prevent runtime errors caused by syntax issues or missing placeholders.\n- Use SET for assigning or overwriting attribute values, REMOVE for deleting attributes, ADD for incrementing numbers or adding set elements, and DELETE for removing set elements.\n- Avoid using reserved keywords directly; always substitute them with ExpressionAttributeNames placeholders.\n\n**Examples**\n- \"SET #name = :newName, #age = :newAge REMOVE #obsoleteAttribute\"\n- \"ADD #score :incrementValue DELETE #tags :tagToRemove\"\n- \"SET #status = :statusValue\"\n- \"REMOVE #temporaryField"},"conditionExpression":{"type":"string","description":"A string that defines the conditional logic to be applied to a DynamoDB operation, specifying the precise criteria that must be met for the operation to proceed. This expression leverages DynamoDB's condition expression syntax to evaluate attributes of the targeted item, enabling fine-grained control over operations such as PutItem, UpdateItem, DeleteItem, and transactional writes. It supports a comprehensive set of logical operators (AND, OR, NOT), comparison operators (=, <, >, BETWEEN, IN), functions (attribute_exists, attribute_not_exists, begins_with, contains, size), and attribute path notations to construct complex, nested conditions. 
This ensures data integrity by preventing unintended modifications and enforcing business rules at the database level.\n\n**Field behavior**\n- Determines whether the DynamoDB operation executes based on the evaluation of the condition against the item's attributes.\n- Evaluated atomically before performing the operation; if the condition evaluates to false, the operation is aborted and no changes are made.\n- Supports combining multiple conditions using logical operators for complex, multi-attribute criteria.\n- Utilizes built-in functions to inspect attribute existence, value patterns, and sizes, enabling sophisticated conditional checks.\n- Applies exclusively to the specific item targeted by the operation, including nested attributes accessed via attribute path notation.\n- Influences transactional operations by ensuring all conditions in a transaction are met before committing.\n\n**Implementation guidance**\n- Always use ExpressionAttributeNames and ExpressionAttributeValues to safely reference attribute names and values, avoiding conflicts with reserved keywords and improving readability.\n- Validate the syntax of the condition expression prior to request submission to prevent runtime errors and failed operations.\n- Carefully construct expressions to handle edge cases, such as attributes that may be missing or have null values.\n- Combine multiple conditions logically to enforce precise constraints and business logic on the operation.\n- Test condition expressions thoroughly across diverse data scenarios to ensure expected behavior and robustness.\n- Keep expressions as simple and efficient as possible to optimize performance and reduce request complexity.\n\n**Examples**\n- \"attribute_exists(#name) AND #age >= :minAge\"\n- \"#status = :activeStatus\"\n- \"attribute_not_exists(#id)\"\n- \"#score BETWEEN :low AND :high\"\n- \"begins_with(#title, :prefix)\"\n- \"contains(#tags, :tagValue) OR size(#comments) > :minComments\"\n\n**Important notes**\n- The conditionExpression is optional but highly recommended to safeguard against unintended overwrites, deletions, or updates.\n- If the condition"},"expressionAttributeNames":{"type":"string","description":"expressionAttributeNames is a map of substitution tokens used to safely reference attribute names within DynamoDB expressions. This property allows you to define placeholder tokens for attribute names that might otherwise cause conflicts due to being reserved words, containing special characters, or having names that are dynamically generated. 
By using these placeholders, you can avoid syntax errors and ambiguities in expressions such as ProjectionExpression, FilterExpression, UpdateExpression, and ConditionExpression, ensuring your queries and updates are both valid and maintainable.\n\n**Field behavior**\n- Functions as a dictionary where each key is a placeholder token starting with a '#' character, and each value is the actual attribute name it represents.\n- Enables the use of reserved words, special characters, or otherwise problematic attribute names within DynamoDB expressions.\n- Supports complex expressions by allowing multiple attribute name substitutions within a single expression.\n- Prevents syntax errors and improves clarity by explicitly mapping placeholders to attribute names.\n- Applies only within the scope of the expression where it is defined, ensuring localized substitution without affecting other parts of the request.\n\n**Implementation guidance**\n- Always prefix keys with a '#' to clearly indicate they are substitution tokens.\n- Ensure each placeholder token is unique within the scope of the expression to avoid conflicts.\n- Use expressionAttributeNames whenever attribute names are reserved words, contain special characters, or are dynamically generated.\n- Combine with expressionAttributeValues when expressions require both attribute name and value substitutions.\n- Verify that the attribute names used as values exactly match those defined in your DynamoDB table schema to prevent runtime errors.\n- Keep the mapping concise and relevant to the attributes used in the expression to maintain readability.\n- Avoid overusing substitution tokens for attribute names that do not require it, to keep expressions straightforward.\n\n**Examples**\n- `{\"#N\": \"Name\", \"#A\": \"Age\"}` to substitute attribute names \"Name\" and \"Age\" in an expression.\n- `{\"#status\": \"Status\"}` when \"Status\" is a reserved word or needs escaping in an expression.\n- `{\"#yr\": \"Year\", \"#mn\": \"Month\"}` for expressions involving multiple date-related attributes.\n- `{\"#addr\": \"Address\", \"#zip\": \"ZipCode\"}` to handle attribute names with special characters or spaces.\n- `{\"#P\": \"Price\", \"#Q\": \"Quantity\"}` used in an UpdateExpression to increment numeric attributes.\n\n**Important notes**\n- Keys must always begin with the '#' character; otherwise, the substitution"},"expressionAttributeValues":{"type":"string","description":"expressionAttributeValues is a map of substitution tokens for attribute values used within DynamoDB expressions. These tokens serve as placeholders that safely inject values into expressions such as ConditionExpression, FilterExpression, UpdateExpression, and KeyConditionExpression. By separating data from code, this mechanism prevents injection attacks and syntax errors, ensuring secure and reliable query and update operations. Each key in this map is a placeholder string that begins with a colon (\":\"), and each corresponding value is a DynamoDB attribute value object representing the actual data to be used in the expression. 
This design enforces that user input is never directly embedded into expressions, enhancing both security and maintainability.\n\n**Field behavior**\n- Contains key-value pairs where keys are placeholders prefixed with a colon (e.g., \":val1\") used within DynamoDB expressions.\n- Each key maps to a DynamoDB attribute value object specifying the data type and value, supporting all DynamoDB data types including strings, numbers, binaries, lists, maps, sets, and booleans.\n- Enables safe and dynamic substitution of attribute values in expressions, avoiding direct string concatenation and reducing the risk of injection vulnerabilities.\n- Mandatory when expressions reference attribute values to ensure correct parsing and execution of queries or updates.\n- Placeholders are case-sensitive and must exactly match those used in the corresponding expressions.\n- Supports complex data structures, allowing nested maps and lists as attribute values for advanced querying scenarios.\n\n**Implementation guidance**\n- Always prefix placeholder keys with a colon (\":\") to clearly identify them as substitution tokens within expressions.\n- Ensure every placeholder referenced in an expression has a corresponding entry in expressionAttributeValues to avoid runtime errors.\n- Format each value according to DynamoDB’s attribute value specification, such as { \"S\": \"string\" }, { \"N\": \"123\" }, { \"BOOL\": true }, or complex nested structures.\n- Avoid using reserved words, spaces, or invalid characters in placeholder names to prevent syntax errors and conflicts.\n- Validate that the data types of values align with the expected attribute types defined in the table schema to maintain data integrity.\n- When multiple expressions are used simultaneously, maintain unique and consistent placeholder names to prevent collisions and ambiguity.\n- Use expressionAttributeValues in conjunction with expressionAttributeNames when attribute names are reserved words or contain special characters to ensure proper expression parsing.\n- Consider the size limitations of expressionAttributeValues to avoid exceeding DynamoDB’s request size limits.\n\n**Examples**\n- { \":val1\": { \"S\": \""},"ignoreExtract":{"type":"string","description":"Specifies whether the extraction process from DynamoDB data should be ignored or skipped during the operation, enabling conditional control over data retrieval to optimize performance, reduce latency, or avoid redundant processing in data workflows. This flag allows systems to bypass the extraction step when data is already available, cached, or deemed unnecessary for the current operation, thereby improving efficiency and resource utilization.  \n**Field** BEHAVIOR:  \n- When set to true, the extraction of data from DynamoDB is completely bypassed, preventing any data retrieval from the source.  \n- When set to false or omitted, the extraction process proceeds as normal, retrieving data for further processing.  \n- Primarily used to optimize workflows by skipping unnecessary extraction steps when data is already available or not needed.  \n- Influences the flow of data processing by controlling whether DynamoDB data is fetched or not.  \n- Affects downstream processing by potentially providing no new data input if extraction is skipped.  \n**Implementation guidance:**  \n- Accepts a boolean value: true to skip extraction, false to perform extraction.  \n- Ensure that downstream components can handle scenarios where no data is extracted due to this flag being true.  
\n- Validate input to avoid unintended skipping of extraction that could cause data inconsistencies or stale results.  \n- Use this flag judiciously, especially in complex pipelines where data dependencies exist, to prevent breaking data integrity.  \n- Incorporate appropriate logging or monitoring to track when extraction is skipped for auditability and debugging.  \n**Examples:**  \n- ignoreExtract: true — extraction from DynamoDB is skipped, useful when data is cached or pre-fetched.  \n- ignoreExtract: false — extraction occurs normally, retrieving fresh data from DynamoDB.  \n- omit the property entirely — defaults to false, extraction proceeds as usual.  \n**Important notes:**  \n- Skipping extraction may result in incomplete, outdated, or stale data if subsequent operations rely on fresh DynamoDB data.  \n- This option should only be enabled when you are certain that extraction is unnecessary or redundant to avoid data inconsistencies.  \n- Consider the impact on data integrity, consistency, and downstream processing before enabling this flag.  \n- Misuse can lead to errors, unexpected behavior, or data quality issues in data workflows.  \n- Ensure that any caching or alternative data sources used in place of extraction are reliable and up-to-date.  \n**Dependency chain:**  \n- Requires a valid DynamoDB data source configuration to be relevant.  \n-"}}},"Http":{"type":"object","description":"Configuration for HTTP imports.\n\nIMPORTANT: When the _connectionId field points to a connection where the type is http, \nthis object MUST be populated for the import to function properly. This is a required configuration\nfor all HTTP based imports, as determined by the connection associated with the import.\n","properties":{"formType":{"type":"string","enum":["assistant","http","rest","graph_ql","assistant_graphql"],"description":"The form type to use for the import.\n- **assistant**: The import is for a system that we have a assistant form for.  This is the default value if there is a connector available for the system.\n- **http**: The import is an HTTP import.\n- **rest**: This value is only used for legacy imports that are not supported by the new HTTP framework.\n- **graph_ql**: The import is a GraphQL import.\n- **assistant_graphql**: The import is for a system that we have a assistant form for and utilizes GraphQL.  This is the default value if there is a connector available for the system and the connector supports GraphQL.\n"},"type":{"type":"string","enum":["file","records"],"description":"Specifies the type of data being imported via HTTP.\n\n- **file**: The import handles raw file content (binary data, PDF, images, etc.)\n- **records**: The import handles structured record data (JSON objects, XML records, etc.) 
- most common\n\nMost HTTP imports use \"records\" type for standard REST API data imports.\nOnly use \"file\" type when importing raw file content that will be processed downstream.\n"},"requestMediaType":{"type":"string","enum":["xml","json","csv","urlencoded","form-data","octet-stream","plaintext"],"description":"Specifies the media type (content type) of the request body sent to the target API.\n\n- **json**: For JSON request bodies (Content-Type: application/json) - most common\n- **xml**: For XML request bodies (Content-Type: application/xml)\n- **csv**: For CSV data (Content-Type: text/csv)\n- **urlencoded**: For URL-encoded form data (Content-Type: application/x-www-form-urlencoded)\n- **form-data**: For multipart form data\n- **octet-stream**: For binary data\n- **plaintext**: For plain text content\n"},"_httpConnectorEndpointIds":{"type":"array","items":{"type":"string","format":"objectId"},"description":"Array of HTTP connector endpoint IDs to use for this import.\nMultiple endpoints can be specified for different request types or operations.\n"},"blobFormat":{"type":"string","enum":["utf8","ucs2","utf-16le","ascii","binary","base64","hex"],"description":"Character encoding format for blob/binary data imports.\nOnly relevant when type is \"file\" or when handling binary content.\n"},"batchSize":{"type":"integer","description":"Maximum number of records to submit per HTTP request.\n\n- REST services typically use batchSize=1 (one record per request)\n- REST batch endpoints or RPC/XML services can handle multiple records\n- Affects throughput and API rate limiting\n\nDefault varies by API - consult target API documentation for optimal value.\n"},"successMediaType":{"type":"string","enum":["xml","json","plaintext"],"description":"Specifies the media type expected in successful responses.\n\n- **json**: For JSON responses (most common)\n- **xml**: For XML responses\n- **plaintext**: For plain text responses\n"},"requestType":{"type":"array","items":{"type":"string","enum":["CREATE","UPDATE"]},"description":"Specifies the type of operation(s) this import performs.\n\n- **CREATE**: Creating new records\n- **UPDATE**: Updating existing records\n\nFor composite/upsert imports, specify both operations. The array is positionally\naligned with relativeURI and method arrays — each index maps to one operation.\nThe existingExtract field determines which operation is used at runtime.\n\nExample composite upsert:\n```json\n\"requestType\": [\"UPDATE\", \"CREATE\"],\n\"method\": [\"PUT\", \"POST\"],\n\"relativeURI\": [\"/customers/{{{data.0.customerId}}}.json\", \"/customers.json\"],\n\"existingExtract\": \"customerId\"\n```\n"},"errorMediaType":{"type":"string","enum":["xml","json","plaintext"],"description":"Specifies the media type expected in error responses.\n\n- **json**: For JSON error responses (most common)\n- **xml**: For XML error responses\n- **plaintext**: For plain text error messages\n"},"relativeURI":{"type":"array","items":{"type":"string"},"description":"**CRITICAL: This MUST be an array of strings, NOT an object.**\n\n**Handlebars data context:** At runtime, `relativeURI` templates always render against the\n**pre-mapped** record — the original input record that arrives at this import step, before\nthe Import's mapping step is applied. 
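For example, a template such as `/customers/{{customerId}}.json` resolves customerId from the incoming (pre-mapped) record even if the mapping step later renames or drops that field (field name illustrative). 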
This is intentional: URI construction usually needs\nbusiness identifiers that mappings may rename or remove.\n\nArray of relative URI path(s) for the import endpoint.\nCan include handlebars expressions like `{{field}}` for dynamic values.\n\n**Single-operation imports (one element)**\n```json\n\"relativeURI\": [\"/api/v1/customers\"]\n```\n\n**Composite/upsert imports (multiple elements — positionally aligned)**\nFor imports with both CREATE and UPDATE in requestType, relativeURI, method,\nand requestType arrays are **positionally aligned**. Each index corresponds\nto one operation. The existingExtract field determines which index is used at\nruntime: if the existingExtract field has a value → use the UPDATE index;\nif empty/missing → use the CREATE index.\n\nExample of a composite upsert:\n```json\n\"relativeURI\": [\"/customers/{{{data.0.shopifyCustomerId}}}.json\", \"/customers.json\"],\n\"method\": [\"PUT\", \"POST\"],\n\"requestType\": [\"UPDATE\", \"CREATE\"],\n\"existingExtract\": \"shopifyCustomerId\"\n```\nHere index 0 is the UPDATE operation (PUT to a specific customer) and index 1 is\nthe CREATE operation (POST to the collection endpoint).\n\n**IMPORTANT**: For composite imports, use the `{{{data.0.fieldName}}}` handlebars\nsyntax (triple braces with data.0 prefix) to reference the existing record ID in\nthe UPDATE URI. Do NOT use `{{#if}}` conditionals in a single URI — use separate\narray elements instead.\n\n**Wrong format (DO not do THIS)**\n```json\n\"relativeURI\": {\"/api/v1/customers\": \"CREATE\"}\n```\n```json\n\"relativeURI\": [\"{{#if id}}/customers/{{id}}.json{{else}}/customers.json{{/if}}\"]\n```\n"},"method":{"type":"array","items":{"type":"string","enum":["GET","PUT","POST","PATCH","DELETE"]},"description":"HTTP method(s) used for import requests.\n\n- **POST**: Most common for creating new records\n- **PUT**: For full updates/replacements\n- **PATCH**: For partial updates\n- **DELETE**: For deletions\n- **GET**: Rarely used for imports\n\nFor composite/upsert imports, specify multiple methods positionally aligned\nwith relativeURI and requestType arrays. Each index maps to one operation.\n\nExample: `[\"PUT\", \"POST\"]` with `requestType: [\"UPDATE\", \"CREATE\"]` means\nindex 0 uses PUT for updates, index 1 uses POST for creates.\n"},"_httpConnectorVersionId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector version being used."},"_httpConnectorResourceId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector resource being used."},"_httpConnectorEndpointId":{"type":"string","format":"objectId","description":"Reference to the HTTP connector endpoint being used (single endpoint)."},"body":{"type":"array","items":{"type":"object"},"description":"**Important:** This field should be LEFT UNDEFINED for the vast majority of HTTP imports.\n\nThis is an OPTIONAL field that should only be set in very specific, rare cases. 
For standard REST API imports\nthat send JSON data, this field MUST be left undefined because data transformation is handled through the\nImport's mapping step, not through this body template.\n\n**When to leave this field undefined (MOST common CASE)**\n\nLeave this field undefined for ALL standard data imports, including:\n- REST API imports that send JSON records to create/update resources\n- APIs that accept JSON request bodies (the majority of modern APIs)\n- Any import where you want to use Import mappings to transform data\n- Standard CRUD operations (Create, Update, Delete) via REST APIs\n- GraphQL mutations that accept JSON input\n- Any import where the Import mapping step will handle data transformation\n\nExamples of imports that should have this field undefined:\n- \"Import customer data into Shopify\" → undefined (mappings handle the data transformation)\n- \"Create orders via custom REST API\" → undefined (mappings handle the data transformation)\n\n**KEY CONCEPT:** Imports have a dedicated \"Mapping\" step that transforms source data into the target\nformat. This is the standard, preferred approach. When the body field is set, the mapping step still runs\nfirst — the body template then renders against the **post-mapped** record instead of the destination API\nreceiving the mapped record as-is. This lets you post-process the mapped output into XML, SOAP envelopes,\nor other non-JSON shapes, but forces you to hand-template the entire request and is harder to maintain and debug.\n\n**When to set this field (RARE use CASE)**\n\nSet this field ONLY when:\n1. The target API requires XML or SOAP format (not JSON)\n2. You need a highly customized request structure that cannot be achieved through standard mappings\n3. You're working with a legacy API that requires very specific formatting\n4. The API documentation explicitly shows a complex XML/SOAP envelope structure\n5. When the user explicitly specifies a request body\n\nExamples of when to set body:\n- \"Import to SOAP web service\" → set body with XML/SOAP template\n- \"Send data to XML-only legacy API\" → set body with XML template\n- \"Complex SOAP envelope with multiple nested elements\" → set body with SOAP template\n- \"Create an import to /test with the body as {'key': 'value', 'key2': 'value2', ...}\"\n\n**Implementation details**\n\nWhen this field is undefined (default for most imports):\n- The system uses the Import's mapping step to transform data\n- Mappings provide a visual, intuitive way to map source fields to target fields\n- The mapped data is automatically sent as the request body\n- Easier to debug and maintain\n\nWhen this field is set:\n- The mapping step still runs — it produces a post-mapped record that is then fed into this body template\n- The template renders against the **post-mapped** record (use `{{record.<mappedField>}}` to reference mapping outputs)\n- You are responsible for shaping the final request body (XML/SOAP/custom JSON) by wrapping / restructuring the mapped data\n- Harder to maintain and debug than relying on mappings alone\n\n**Decision flowchart**\n\n1. Does the target API accept JSON request bodies?\n   → YES: Leave this field undefined (use mappings instead)\n2. Are you comfortable using the Import's mapping step?\n   → YES: Leave this field undefined\n3. Does the API require XML or SOAP format?\n   → YES: Set this field with appropriate XML/SOAP template\n4. 
Does the API require a highly unusual request structure that mappings can't handle?\n   → YES: Consider setting this field (but try mappings first)\n\nRemember: When in doubt, leave this field undefined. Almost all modern REST APIs work perfectly\nwith the standard mapping approach, and this is much easier to use and maintain.\n"},"existingExtract":{"type":"string","description":"Field name or JSON path used to determine if a record already exists in the destination,\nfor composite (upsert) imports that have both CREATE and UPDATE request types.\n\nWhen the import has requestType: [\"UPDATE\", \"CREATE\"] (or [\"CREATE\", \"UPDATE\"]):\n- If the field specified by existingExtract has a value in the incoming record,\n  the UPDATE operation (PUT) is used with the corresponding relativeURI and method.\n- If the field is empty or missing, the CREATE operation (POST) is used.\n\nThis field drives the upsert decision and must match a field that is populated by\nan upstream lookup or response mapping (e.g., a destination system ID like \"shopifyCustomerId\").\n\nIMPORTANT: Only set this field for composite/upsert imports that have BOTH CREATE and UPDATE\nin requestType. Do not set for single-operation imports.\n"},"ignoreExtract":{"type":"string","description":"JSON path to the field in the record that is used to determine if the record already exists.\nOnly used when ignoreMissing or ignoreExisting is set.\n"},"endPointBodyLimit":{"type":"integer","description":"Maximum size limit for the request body in bytes.\nUsed to enforce API-specific size constraints.\n"},"headers":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string"},"value":{"type":"string"}}},"description":"Custom HTTP headers to include with import requests.\n\nEach header has a name and value, which can include handlebars expressions.\n\nAt runtime, header value templates render against the **pre-mapped**\nrecord (the original input record, before the Import's mapping step).\nUse `{{record.<field>}}` to reference fields as they appear in the\nupstream source — mappings don't rename or drop fields for header\nevaluation.\n"},"response":{"type":"object","properties":{"resourcePath":{"type":"array","items":{"type":"string"},"description":"JSON path to the resource collection in the response.\nRequired when batchSize > 1.\n"},"resourceIdPath":{"type":"array","items":{"type":"string"},"description":"JSON path to the ID field in each resource.\nIf not specified, system looks for \"id\" or \"_id\" fields.\n"},"successPath":{"type":"array","items":{"type":"string"},"description":"JSON path to a field that indicates success.\nUsed for APIs that don't rely solely on HTTP status codes.\n"},"successValues":{"type":"array","items":{"type":"array","items":{"type":"string"}},"description":"Values at successPath that indicate success.\n"},"failPath":{"type":"array","items":{"type":"string"},"description":"JSON path to a field that indicates failure (even if status=200).\n"},"failValues":{"type":"array","items":{"type":"array","items":{"type":"string"}},"description":"Values at failPath that indicate failure.\n"},"errorPath":{"type":"string","description":"JSON path to error message in the response.\n"},"allowArrayforSuccessPath":{"type":"boolean","description":"Whether to allow array values for successPath evaluation.\n"},"hasHeader":{"type":"boolean","description":"Indicates if the first record in the response is a header row (for CSV responses).\n"}},"description":"Configuration for parsing and interpreting HTTP 
responses.\n"},"_asyncHelperId":{"type":"string","format":"objectId","description":"Reference to an AsyncHelper resource for handling asynchronous import operations.\n\nUsed when the target API uses a \"fire-and-check-back\" pattern (HTTP 202, polling, etc.).\n"},"ignoreLookupName":{"type":"string","description":"Name of the lookup to use for checking resource existence.\nOnly used when ignoreMissing or ignoreExisting is set.\n"}}},"Ftp":{"type":"object","description":"Configuration for Ftp exports","properties":{"_tpConnectorId":{"type":"string","format":"objectId","description":"Identifier for the third-party connector utilized in the FTP integration, serving as a unique and immutable reference that links the FTP configuration to an external connector service. This identifier is essential for accurately associating the FTP connection with the intended third-party system, enabling seamless and secure data transfer, synchronization, and management through the specified connector. It ensures that all FTP operations are correctly routed and managed via the external service, supporting reliable integration workflows and maintaining system integrity throughout the connection lifecycle.\n\n**Field behavior**\n- Uniquely identifies the third-party connector associated with the FTP configuration.\n- Acts as a key reference to bind FTP settings to a specific external connector service.\n- Required when creating, updating, or managing FTP connections via third-party integrations.\n- Typically immutable after initial assignment to preserve connection consistency.\n- Used by system components to route FTP operations through the correct external connector.\n\n**Implementation guidance**\n- Should be a string or numeric identifier that aligns with the third-party connector’s identification scheme.\n- Must be validated against the system’s registry of available connectors to ensure validity.\n- Should be set securely and protected from unauthorized modification.\n- Updates to this identifier should be handled cautiously, with appropriate validation and impact assessment.\n- Ensure the identifier complies with any formatting or naming conventions imposed by the third-party service.\n- Transmit and store this identifier securely, using encryption where applicable.\n\n**Examples**\n- \"connector_12345\"\n- \"tpConnector-67890\"\n- \"ftpThirdPartyConnector01\"\n- \"extConnector_abcde\"\n- \"thirdPartyFTP_001\"\n\n**Important notes**\n- This field is critical for correctly mapping FTP configurations to their corresponding third-party services.\n- An incorrect or missing _tpConnectorId can lead to connection failures, data misrouting, or integration errors.\n- Synchronization between this identifier and the third-party connector records must be maintained to avoid inconsistencies.\n- Changes to this field may require re-authentication or reconfiguration of the FTP integration.\n- Proper error handling should be implemented to manage cases where the identifier does not match any registered connector.\n\n**Dependency chain**\n- Depends on the existence and registration of third-party connectors within the system.\n- Referenced by FTP connection management modules, authentication services, and integration workflows.\n- Interacts with authorization mechanisms to ensure secure access to third-party services.\n- May influence logging, monitoring, and auditing processes related to FTP integrations.\n\n**Technical details**\n- Data type: string (or"},"directoryPath":{"type":"string","description":"The path to the 
directory on the FTP server where files will be accessed, stored, or managed. This path can be specified either as an absolute path starting from the root directory of the FTP server or as a relative path based on the user's initial directory upon login, depending on the server's configuration and permissions. It is essential that the path adheres to the FTP server's directory structure, naming conventions, and access controls to ensure successful file operations such as uploading, downloading, listing, or deleting files. The directoryPath serves as the primary reference point for all file-related commands during an FTP session, determining the scope and context of file management activities.\n\n**Field behavior**\n- Defines the target directory location on the FTP server for all file-related operations.\n- Supports both absolute and relative path formats, interpreted according to the FTP server’s setup.\n- Must comply with the server’s directory hierarchy, naming rules, and access permissions.\n- Used by the FTP client to navigate and perform operations within the specified directory.\n- Influences the scope of accessible files and subdirectories during FTP sessions.\n- Changes to this path affect subsequent file operations until updated or reset.\n\n**Implementation guidance**\n- Validate the directory path format to ensure compatibility with the FTP server, typically using forward slashes (`/`) as separators.\n- Normalize the path to remove redundant elements such as `.` (current directory) and `..` (parent directory) to prevent errors and security vulnerabilities.\n- Handle scenarios where the specified directory does not exist by either creating it if permissions allow or returning a clear, descriptive error message.\n- Properly encode special characters and escape sequences in the path to conform with FTP protocol requirements and server expectations.\n- Provide informative and user-friendly error messages when the path is invalid, inaccessible, or lacks sufficient permissions.\n- Sanitize input to prevent directory traversal attacks or unauthorized access to restricted areas.\n- Consider server-specific behaviors such as case sensitivity, symbolic links, and virtual directories when interpreting the path.\n- Ensure that path changes are atomic and consistent to avoid race conditions during concurrent FTP operations.\n\n**Examples**\n- `/uploads/images`\n- `documents/reports/2024`\n- `/home/user/data`\n- `./backup`\n- `../shared/resources`\n- `/var/www/html`\n- `projects/current`\n\n**Important notes**\n- Access to the specified directory depends on the credentials used during FTP authentication.\n- Directory permissions directly affect the ability to read, write, or list files within the"},"fileName":{"type":"string","description":"The name of the file to be accessed, transferred, or manipulated via FTP (File Transfer Protocol). This property specifies the exact filename, including its extension, uniquely identifying the file within the FTP directory. It is critical for accurately targeting the file during operations such as upload, download, rename, or delete. The filename must be precise and comply with the naming conventions and restrictions of the FTP server’s underlying file system to avoid errors. Case sensitivity depends on the server’s operating system, and special attention should be given to character encoding, especially when handling non-ASCII or international characters. 
The filename should not include any directory path separators, as these are managed separately in directory or path properties.\n\n**Field behavior**\n- Identifies the specific file within the FTP server directory for various file operations.\n- Must include the file extension to ensure precise identification.\n- Case sensitivity is determined by the FTP server’s operating system.\n- Excludes directory or path information; only the filename itself is specified.\n- Used in conjunction with directory or path properties to locate the full file path.\n\n**Implementation guidance**\n- Validate that the filename contains only characters allowed by the FTP server’s file system.\n- Ensure the filename is a non-empty, well-formed string without path separators.\n- Normalize case if required to match server-specific case sensitivity rules.\n- Properly handle encoding for special or international characters to prevent transfer errors.\n- Combine with directory or path properties to construct the complete file location.\n- Avoid embedding directory paths or separators within the filename property.\n\n**Default behavior**\n- When the user does not specify a file naming convention, default to timestamped filenames using {{timestamp}} (e.g., \"items-{{timestamp}}.csv\"). This ensures each run produces a unique file and avoids overwriting previous exports.\n- Only use a fixed filename (without timestamp) if the user explicitly requests overwriting or a fixed name.\n\n**Examples**\n- \"report.pdf\"\n- \"image_2024.png\"\n- \"backup_2023_12_31.zip\"\n- \"data.csv\"\n- \"items-{{timestamp}}.csv\"\n\n**Important notes**\n- The filename alone may not be sufficient to locate the file without the corresponding directory or path.\n- Different FTP servers and operating systems may have varying rules regarding case sensitivity.\n- Path separators such as \"/\" or \"\\\" must not be included in the filename.\n- Adherence to FTP server naming conventions and restrictions is essential to prevent errors.\n- Be mindful of potential filename length limits imposed by the FTP server or protocol.\n\n**Dependency chain**\n- Depends on FTP directory or path properties to specify the full file location.\n- Utilized alongside FTP server credentials and connection settings.\n- May be influenced by file transfer mode (binary or ASCII) based on the file type.\n\n**Technical details**\n- Data type: string\n- Must conform to the FTP server’s"},"inProgressFileName":{"type":"string","description":"inProgressFileName is the designated temporary filename assigned to a file during the FTP upload process to clearly indicate that the file transfer is currently underway and not yet complete. This naming convention serves as a safeguard to prevent other systems or processes from prematurely accessing, processing, or locking the file before the upload finishes successfully. Typically, once the upload is finalized, the file is renamed to its intended permanent filename, signaling that it is ready for further use or processing. Utilizing an in-progress filename helps maintain data integrity, avoid conflicts, and streamline workflow automation in environments where files are continuously transferred and monitored. 
It plays a critical role in managing file state visibility, ensuring that incomplete or partially transferred files are not mistakenly processed, which could lead to data corruption or inconsistent system behavior.\n\n**Field behavior**\n- Acts as a temporary marker for files actively being uploaded via FTP or similar protocols.\n- Prevents other processes, systems, or automated workflows from accessing or processing the file until the upload is fully complete.\n- The file is renamed atomically from the inProgressFileName to the final intended filename upon successful upload completion.\n- Helps manage concurrency, avoid file corruption, partial reads, or race conditions during file transfer.\n- Indicates the current state of the file within the transfer lifecycle, providing clear visibility into upload progress.\n- Supports cleanup mechanisms by identifying orphaned or stale temporary files resulting from interrupted uploads.\n\n**Implementation guidance**\n- Choose a unique, consistent, and easily identifiable naming pattern distinct from final filenames to clearly denote the in-progress status.\n- Commonly use suffixes or prefixes such as \".inprogress\", \".tmp\", \"_uploading\", or similar conventions that are recognized across systems.\n- Ensure the FTP server and client support atomic renaming operations to avoid partial file visibility or race conditions.\n- Verify that the chosen temporary filename does not collide with existing files in the target directory to prevent overwriting or confusion.\n- Maintain consistent naming conventions across all related systems, scripts, and automation tools for clarity and maintainability.\n- Implement robust cleanup routines to detect and remove orphaned or stale in-progress files from failed or abandoned uploads.\n- Coordinate synchronization between upload completion and renaming to prevent premature processing or file locking issues.\n\n**Examples**\n- \"datafile.csv.inprogress\"\n- \"upload.tmp\"\n- \"report_20240615.tmp\"\n- \"file_upload_inprogress\"\n- \"transaction_12345.uploading\"\n\n**Important notes**\n- This filename is strictly temporary and should never be"},"backupDirectoryPath":{"type":"string","description":"backupDirectoryPath specifies the file system path to the directory where backup files are stored during FTP operations. This directory serves as the designated location for saving copies of files before they are overwritten or deleted, ensuring that original data can be recovered if necessary. The path can be either absolute or relative, depending on the system context, and must conform to the operating system’s file path conventions. It is essential that this path is valid, accessible, and writable by the FTP service or application performing the backups to guarantee reliable backup creation and data integrity. Proper configuration of this path is critical to prevent data loss and to maintain a secure and organized backup environment.  \n**Field behavior:**  \n- Specifies the target directory for storing backup copies of files involved in FTP transactions.  \n- Activated only when backup functionality is enabled within the FTP process.  \n- Requires the path to be valid, accessible, and writable to ensure successful backup creation.  \n- Supports both absolute and relative paths, with relative paths resolved against a defined base directory.  \n- If unset or empty, backup operations may be disabled or fallback to a default directory if configured.  
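\n**Example (illustrative):**  \nA minimal sketch showing how this field is typically paired with the related FTP path and file-name fields; all values below are placeholders, not defaults:  \n```json\n{\n  \"directoryPath\": \"/outbound/orders\",\n  \"fileName\": \"orders-{{timestamp}}.csv\",\n  \"inProgressFileName\": \"orders.csv.tmp\",\n  \"backupDirectoryPath\": \"/outbound/orders/backup\"\n}\n```  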
\n**Implementation guidance:**  \n- Validate the existence of the directory and verify write permissions before initiating backups.  \n- Normalize paths to handle trailing slashes, redundant separators, and platform-specific path formats.  \n- Provide clear and actionable error messages if the directory is invalid, inaccessible, or lacks necessary permissions.  \n- Support environment variables or placeholders for dynamic path resolution where applicable.  \n- Enforce security best practices by restricting access to the backup directory to authorized users or processes only.  \n- Ensure sufficient disk space is available in the backup directory to accommodate backup files without interruption.  \n- Implement concurrency controls such as file locking or synchronization if multiple backup operations may occur simultaneously.  \n**Examples:**  \n- \"/var/ftp/backups\"  \n- \"C:\\\\ftp\\\\backup\"  \n- \"./backup_files\"  \n- \"/home/user/ftp_backup\"  \n**Important notes:**  \n- The backup directory must have adequate storage capacity to prevent backup failures due to insufficient space.  \n- Permissions must allow the FTP process or service to create and modify files within the directory.  \n- Backup files may contain sensitive or confidential data; appropriate security measures should be enforced to protect them.  \n- Path formatting should align with the conventions of the underlying operating system to avoid errors.  \n- Failure to properly configure this path can result in loss of backup data or inability to recover"}}},"Jdbc":{"type":"object","description":"Configuration for JDBC import operations. Defines how data is written to a database\nvia a JDBC connection.\n\n**Query type determines which fields are required**\n\n| queryType        | Required fields              | Do NOT set        |\n|------------------|------------------------------|-------------------|\n| [\"per_record\"]   | query (array of SQL strings) | bulkInsert        |\n| [\"per_page\"]     | query (array of SQL strings) | bulkInsert        |\n| [\"bulk_insert\"]  | bulkInsert object            | query             |\n| [\"bulk_load\"]    | bulkLoad object              | query             |\n\n**Critical:** query IS AN ARRAY OF STRINGS\nThe query field must be an array of plain strings, NOT a single string and NOT an array of objects.\nCorrect: [\"INSERT INTO users (name) VALUES ('{{name}}')\"]\nWrong: \"INSERT INTO users ...\"\nWrong: [{\"query\": \"INSERT INTO users ...\"}]\n","properties":{"query":{"type":"array","items":{"type":"string"},"description":"Array of SQL query strings to execute. REQUIRED when queryType is [\"per_record\"] or [\"per_page\"].\n\nEach element is a complete SQL statement as a plain string. Typically contains a single query.\n\nUse Handlebars {{fieldName}} syntax to inject values from incoming records.\n\n**Format — array of strings**\n- CORRECT: [\"INSERT INTO users (name, email) VALUES ('{{name}}', '{{email}}')\"]\n- WRONG: \"INSERT INTO users ...\"  (not an array)\n- WRONG: [{\"query\": \"INSERT INTO users ...\"}]  (objects are invalid — causes Cast error)\n\n**Examples**\n- INSERT: [\"INSERT INTO users (name, email) VALUES ('{{name}}', '{{email}}')\"]\n- UPDATE: [\"UPDATE inventory SET qty = {{qty}} WHERE sku = '{{sku}}'\"]\n- MERGE/UPSERT: [\"MERGE INTO target USING (SELECT CAST(? AS VARCHAR) AS email) AS src ON target.email = src.email WHEN MATCHED THEN UPDATE SET name = ? 
WHEN NOT MATCHED THEN INSERT (name, email) VALUES (?, ?)\"]\n\n**String vs numeric values in handlebars**\n- Strings: wrap in single quotes — '{{name}}'\n- Numbers: no quotes — {{quantity}}\n"},"queryType":{"type":"array","items":{"type":"string","enum":["bulk_insert","per_record","per_page","bulk_load"]},"description":"Execution strategy for the SQL operation. REQUIRED. Must be an array with one value.\n\n**Decision tree**\n\n1. If UPDATE or UPSERT/MERGE → [\"per_record\"] (set query field)\n2. If INSERT with \"ignore existing\" / \"skip duplicates\" / match logic → [\"per_record\"] (set query field)\n3. If pure INSERT with no duplicate checking → [\"bulk_insert\"] (set bulkInsert object)\n4. If high-volume bulk load → [\"bulk_load\"] (set bulkLoad object)\n\n**Critical relationship to other fields**\n| queryType        | REQUIRES              | DO NOT SET   |\n|------------------|-----------------------|--------------|\n| [\"per_record\"]   | query (array)         | bulkInsert   |\n| [\"per_page\"]     | query (array)         | bulkInsert   |\n| [\"bulk_insert\"]  | bulkInsert object     | query        |\n| [\"bulk_load\"]    | bulkLoad object       | query        |\n\n**Examples**\n- Per-record upsert: [\"per_record\"]\n- Bulk insert: [\"bulk_insert\"]\n- Bulk load: [\"bulk_load\"]\n"},"bulkInsert":{"type":"object","description":"Bulk insert configuration. REQUIRED when queryType is [\"bulk_insert\"]. DO NOT SET when queryType is [\"per_record\"].\n\nEnables efficient batch insertion of records into a database table without per-record SQL.\n","properties":{"tableName":{"type":"string","description":"Target database table name for bulk insert. REQUIRED.\nCan include schema qualifiers (e.g., \"schema.tableName\").\n"},"batchSize":{"type":"string","description":"Number of records per batch during bulk insert.\nLarger values improve throughput but use more memory.\nCommon values: \"1000\", \"5000\".\n"}}},"bulkLoad":{"type":"object","description":"Bulk load configuration. REQUIRED when queryType is [\"bulk_load\"]. Uses database-native bulk loading for maximum throughput.\n","properties":{"tableName":{"type":"string","description":"Target database table name for bulk load. 
REQUIRED.\nCan include schema qualifiers (e.g., \"schema.tableName\").\n"},"primaryKeys":{"type":["array","null"],"items":{"type":"string"},"description":"Primary key column names for upsert/merge during bulk load.\nWhen set, existing rows matching these keys are updated; non-matching rows are inserted.\nExample: [\"id\"] or [\"order_id\", \"product_id\"] for composite keys.\n"}}},"lookups":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string","description":"Lookup name, referenced in field mappings or ignore logic."},"query":{"type":"string","description":"SQL query for the lookup (e.g., \"SELECT id FROM users WHERE email = '{{email}}'\")."},"extract":{"type":"string","description":"Path in the lookup result to use as the value (e.g., \"id\")."},"_lookupCacheId":{"type":"string","format":"objectId","description":"Reference to a cached lookup for performance optimization."},"allowFailures":{"type":"boolean","description":"When true, lookup failures do not stop the import."},"map":{"type":"object","description":"Static value mapping for lookup results."},"default":{"type":"string","description":"Default value when lookup returns no results."}}},"description":"Lookup queries executed against the JDBC connection to resolve reference values.\nEach lookup runs a SQL query and extracts a value to use in field mappings.\n"}}},"Mongodb":{"type":"object","description":"Configuration for MongoDB imports. This schema defines ONLY the MongoDB-specific configuration properties.\n\n**IMPORTANT:** Properties like `ignoreExisting`, `ignoreMissing`, `name`, `description` are NOT part of this schema - they belong at a different level. When generating this configuration, return ONLY the properties defined here (method, collection, ignoreExtract, etc.).","properties":{"method":{"type":"string","enum":["insertMany","updateOne"],"description":"The MongoDB collection operation this import performs. The value is a MongoDB driver method, not an HTTP verb, and it determines which of the other fields in this object apply.\n\n- **insertMany**: Inserts the incoming records as new documents in the target collection. Use for create-style imports; combine with the ignore-existing options (`ignoreExtract` or `ignoreLookupFilter`) when duplicates should be skipped.\n- **updateOne**: Updates a single document that matches the `filter` criteria, applying the `update` specification. Set `upsert` to true to insert a new document when no match is found.\n\n**Field behavior**\n- Controls whether the import creates new documents or modifies existing ones.\n- Determines which sibling fields are relevant: `filter`, `update`, and `upsert` apply to updateOne operations.
\n\n**Implementation guidance**\n- Choose insertMany for straightforward \"create new records\" imports.\n- Choose updateOne for \"update existing records\" or \"create or update (upsert)\" imports, and supply a `filter` that uniquely identifies the target document.\n- Verify that the selected method matches the intended operation to avoid unintended inserts or overwrites.\n\n**Examples**\n- insertMany: add each incoming record as a new document in the \"customers\" collection.\n- updateOne with upsert enabled: update the document matching the filter, or insert it if no match exists.\n\n**Important notes**\n- The chosen method must be compatible with the MongoDB operation you intend to perform on the target collection."},"collection":{"type":"string","description":"Specifies the exact name of the MongoDB collection where data operations—including insertion, querying, updating, or deletion—will be executed. This property defines the precise target collection within the database, establishing the scope and context of the data affected by the API call. The collection name must strictly adhere to MongoDB's naming conventions to ensure compatibility, prevent conflicts with system-reserved collections, and maintain database integrity. 
Proper naming facilitates efficient data organization, indexing, query performance, and overall database scalability.\n\n**Field behavior**\n- Identifies the specific MongoDB collection for all CRUD (Create, Read, Update, Delete) operations.\n- Directly determines which subset of data is accessed, modified, or deleted during the API request.\n- Must be a valid, non-empty UTF-8 string that complies with MongoDB collection naming rules.\n- Case-sensitive, meaning collections named \"Users\" and \"users\" are distinct and separate.\n- Influences the application’s data flow and operational context by specifying the data container.\n- Changes to this property dynamically alter the target dataset for the API operation.\n\n**Implementation guidance**\n- Validate that the collection name is a non-empty UTF-8 string without null characters, spaces, or invalid symbols such as '$' or '.' except where allowed.\n- Ensure the name does not start with the reserved prefix \"system.\" to avoid conflicts with MongoDB internal collections.\n- Adopt consistent, descriptive, and meaningful naming conventions to enhance database organization, maintainability, and clarity.\n- Verify the existence of the collection before performing operations; implement robust error handling for cases where the collection does not exist.\n- Consider how the collection name impacts indexing strategies, query optimization, sharding, and overall database performance.\n- Maintain consistency in collection naming across the application to prevent unexpected behavior or data access issues.\n- Avoid overly long or complex names to reduce potential errors and improve readability.\n\n**Examples**\n- \"users\"\n- \"orders\"\n- \"inventory_items\"\n- \"logs_2024\"\n- \"customer_feedback\"\n- \"session_data\"\n- \"product_catalog\"\n\n**Important notes**\n- Collection names are case-sensitive and must be used consistently throughout the application to avoid data inconsistencies.\n- Avoid reserved names and prefixes such as \"system.\" to prevent interference with MongoDB’s internal collections and operations.\n- The choice of collection name can significantly affect database performance, indexing efficiency, and scalability.\n- Renaming a collection requires updating all references in the application and may necessitate data"},"filter":{"type":"string","description":"A MongoDB query filter object used to specify detailed criteria for selecting documents within a collection. This filter defines the exact conditions that documents must satisfy to be included in the query results, leveraging MongoDB's comprehensive and expressive query syntax. It supports a wide range of operators and expressions to enable precise and flexible querying of documents based on field values, existence, data types, ranges, nested properties, array contents, and more. 
The filter allows for complex logical combinations and can target deeply nested fields using dot notation, making it a powerful tool for fine-grained data retrieval.\n\n**Field behavior**\n- Determines which documents in the MongoDB collection are matched and returned by the query.\n- Supports complex query expressions including comparison operators (`$eq`, `$gt`, `$lt`, `$gte`, `$lte`, `$ne`), logical operators (`$and`, `$or`, `$not`, `$nor`), element operators (`$exists`, `$type`), array operators (`$in`, `$nin`, `$all`, `$elemMatch`), and evaluation operators (`$regex`, `$expr`).\n- Enables filtering based on simple fields, nested document properties using dot notation, and array contents.\n- Supports querying for null values, missing fields, and specific data types.\n- Allows combining multiple conditions using logical operators to form complex queries.\n- If omitted or provided as an empty object, the query defaults to returning all documents in the collection without filtering.\n- Can be used to optimize data retrieval by narrowing down the result set to only relevant documents.\n\n**Implementation guidance**\n- Construct the filter using valid MongoDB query operators and syntax to ensure correct behavior.\n- Validate and sanitize all input used to build the filter to prevent injection attacks or malformed queries.\n- Utilize dot notation to query nested fields within embedded documents or arrays.\n- Optimize query performance by aligning filters with indexed fields in the MongoDB collection.\n- Handle edge cases such as null values, missing fields, and data type variations appropriately within the filter.\n- Test filters thoroughly to ensure they return expected results and handle all relevant data scenarios.\n- Consider the impact of filter complexity on query execution time and resource usage.\n- Use projection and indexing strategies in conjunction with filters to improve query efficiency.\n- Be mindful of MongoDB version compatibility when using advanced operators or expressions.\n\n**Examples**\n- `{ \"status\": \"active\" }` — selects documents where the `status` field exactly matches \"active\".\n- `{ \"age\": { \"$gte\":"},"document":{"type":"string","description":"The main content or data structure representing a single record in a MongoDB database, encapsulated as a JSON-like object known as a document. This document consists of key-value pairs where keys are field names and values can be of various data types, including nested documents, arrays, strings, numbers, booleans, dates, and binary data, allowing for flexible and hierarchical data representation. It serves as the fundamental unit of data storage and retrieval within MongoDB collections and typically includes a unique identifier field (_id) that ensures each document can be distinctly accessed. Documents can contain metadata, support complex nested structures, and vary in schema within the same collection, reflecting MongoDB's schema-less design. 
This flexibility enables dynamic and evolving data models suited for diverse application needs, supporting efficient querying, indexing, and atomic operations at the document level.\n\n**Field behavior**\n- Represents an individual MongoDB document stored within a collection.\n- Contains key-value pairs with field names as keys and diverse data types as values, including nested documents and arrays.\n- Acts as the primary unit for data storage, retrieval, and manipulation in MongoDB.\n- Includes a mandatory unique identifier field (_id) unless auto-generated by MongoDB.\n- Supports flexible schema, allowing documents within the same collection to have different structures.\n- Can include metadata fields and support indexing for optimized queries.\n- Allows for embedding related data directly within the document to reduce the need for joins.\n- Supports atomic operations at the document level, ensuring consistency during updates.\n\n**Implementation guidance**\n- Ensure compliance with MongoDB BSON format constraints for data types and structure.\n- Validate field names to exclude reserved characters such as '.' and '$' unless explicitly permitted.\n- Support serialization and deserialization processes between application-level objects and MongoDB documents.\n- Enable handling of nested documents and arrays to represent complex data models effectively.\n- Consider indexing frequently queried fields within the document to enhance performance.\n- Monitor document size to stay within MongoDB’s 16MB limit to avoid performance degradation.\n- Use appropriate data types to optimize storage and query efficiency.\n- Implement validation rules or schema enforcement at the application or database level if needed to maintain data integrity.\n\n**Examples**\n- { \"_id\": ObjectId(\"507f1f77bcf86cd799439011\"), \"name\": \"Alice\", \"age\": 30, \"address\": { \"street\": \"123 Main St\", \"city\": \"Metropolis\" } }\n- { \"productId\": \""},"update":{"type":"string","description":"Update operation details specifying the precise modifications to be applied to one or more documents within a MongoDB collection. This field defines the exact changes using MongoDB's rich set of update operators, enabling atomic and efficient modifications to fields, arrays, or embedded documents without replacing entire documents unless explicitly intended. It supports a comprehensive range of update operations such as setting new values, incrementing numeric fields, removing fields, appending or removing elements in arrays, renaming fields, and more, providing fine-grained control over document mutations. Additionally, it accommodates advanced update mechanisms including pipeline-style updates introduced in MongoDB 4.2+, allowing for complex transformations and conditional updates within a single operation. 
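For example, an update specification such as { \"$set\": { \"status\": \"active\" }, \"$inc\": { \"syncCount\": 1 } } (field names illustrative) changes only the listed fields while leaving the rest of the matched document intact. 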
The update specification can also leverage array filters and positional operators to target specific elements within arrays, enhancing the precision and flexibility of updates.\n\n**Field behavior**\n- Specifies the criteria and detailed modifications to apply to matching documents in a collection.\n- Supports a wide array of MongoDB update operators like `$set`, `$unset`, `$inc`, `$push`, `$pull`, `$addToSet`, `$rename`, `$mul`, `$min`, `$max`, and others.\n- Can target single or multiple documents depending on the operation context and options such as `multi` or `upsert`.\n- Ensures atomicity of update operations to maintain data integrity and consistency across concurrent operations.\n- Allows both partial updates using operators and full document replacements when the update object is a complete document without operators.\n- Supports pipeline-style updates for complex, multi-stage transformations and conditional logic within updates.\n- Enables the use of array filters and positional operators to selectively update elements within arrays based on specified conditions.\n- Handles upsert behavior to insert new documents if no existing documents match the update criteria, when enabled.\n\n**Implementation guidance**\n- Validate the update object rigorously to ensure compliance with MongoDB update syntax and semantics, preventing runtime errors.\n- Favor using update operators for partial updates to optimize performance and minimize unintended data overwrites.\n- Implement graceful handling for cases where no documents match the update criteria, optionally supporting upsert behavior to insert new documents if none exist.\n- Explicitly control whether updates affect single or multiple documents to avoid accidental mass modifications.\n- Incorporate robust error handling for invalid update operations, schema conflicts, or violations of database constraints.\n- Consider the impact of update operations on indexes, triggers, and other database mechanisms to maintain overall system performance and data integrity.\n- Support advanced update features such as array filters, positional operators"},"upsert":{"type":"boolean","description":"Upsert is a boolean flag that controls whether a MongoDB update operation should insert a new document if no existing document matches the specified query criteria, or strictly update existing documents only. When set to true, the operation performs an atomic \"update or insert\" action: it updates the matching document if found, or inserts a new document constructed from the update criteria if none exists. This ensures that after the operation, a document matching the criteria will exist in the collection. When set to false, the operation only updates documents that already exist and skips the operation entirely if no match is found, preventing any new documents from being created. 
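\n\nA minimal illustrative sketch of an upsert-style configuration: collection and field names are placeholders, the update specification is omitted for brevity, and the filter value assumes the same JSON-string-with-handlebars convention used by ignoreLookupFilter in this schema.\n```json\n{\n  \"method\": \"updateOne\",\n  \"collection\": \"customers\",\n  \"filter\": \"{\\\"email\\\": \\\"{{email}}\\\"}\",\n  \"upsert\": true\n}\n```\n\n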
This flag is essential for scenarios where maintaining data integrity and avoiding unintended document creation is critical.\n\n**Field behavior**\n- Determines whether the update operation can create a new document when no existing document matches the query.\n- If true, performs an atomic operation that either updates an existing document or inserts a new one.\n- If false, restricts the operation to updating existing documents only, with no insertion.\n- Influences the database state by enabling conditional document creation during update operations.\n- Affects the outcome of update queries by potentially expanding the dataset with new documents.\n\n**Implementation guidance**\n- Use upsert=true when you need to guarantee the existence of a document after the operation, regardless of prior presence.\n- Use upsert=false to restrict changes strictly to existing documents, avoiding unintended document creation.\n- Ensure that when upsert is true, the update document includes all required fields to create a valid new document.\n- Validate that the query filter and update document are logically consistent to prevent unexpected or malformed insertions.\n- Consider the impact on database size, indexing, and performance when enabling upsert, as it may increase document count.\n- Test the behavior in your specific MongoDB driver and version to confirm upsert semantics and compatibility.\n- Be mindful of potential race conditions in concurrent environments that could lead to duplicate documents if not handled properly.\n\n**Examples**\n- `upsert: true` — Update the matching document if found; otherwise, insert a new document based on the update criteria.\n- `upsert: false` — Update only if a matching document exists; skip the operation if no match is found.\n- Using upsert to create a user profile document if it does not exist during an update operation.\n- Employing upsert in configuration management to ensure default settings documents are present and updated as needed.\n\n**Important notes**\n- Upsert operations can increase"},"ignoreExtract":{"type":"string","description":"Enter the path to the field in the source record that should be used to identify existing records. If a value is found for this field, then the source record will be considered an existing record.\n\nThe dynamic field list makes it easy for you to select the field. If a field contains special characters (which may be the case for certain APIs), then the field is enclosed with square brackets [ ], for example, [field-name].\n\n[*] indicates that the specified field is an array, for example, items[*].id. In this case, you should replace * with a number corresponding to the array item. The value for only this array item (not, the entire array) is checked.\n**CRITICAL:** JSON path to the field in the incoming/source record that should be used to determine if the record already exists in the target system.\n\n**This field is ONLY used when `ignoreExisting` is set to true.** Note: `ignoreExisting` is a separate property that is NOT part of this mongodb configuration.\n\nWhen `ignoreExisting: true` is set (at the import level), this field specifies which field from the incoming data should be checked for a value. If that field has a value, the record is considered to already exist and will be skipped.\n\n**When to set this field**\n\n**ALWAYS set this field** when:\n1. The `ignoreExisting` property will be set to `true` (for \"ignore existing\" scenarios)\n2. 
The user's prompt mentions checking a specific field to identify existing records\n3. The prompt includes phrases like:\n   - \"indicated by the [field] field\"\n   - \"matching the [field] field\"\n   - \"based on [field]\"\n   - \"using the [field] field\"\n   - \"if [field] is populated\"\n   - \"where [field] exists\"\n\n**How to determine the value**\n\n1. **Look for the identifier field in the prompt:**\n   - \"ignoring existing customers indicated by the **id field**\" → `ignoreExtract: \"id\"`\n   - \"skip existing vendors based on the **email field**\" → `ignoreExtract: \"email\"`\n   - \"ignore records where **customerId** is populated\" → `ignoreExtract: \"customerId\"`\n\n2. **Use the field from the incoming/source data** (NOT the target system field):\n   - This is the field path in the data coming INTO the import\n   - Should match a field that will be present in the source records\n\n3. **Apply special character handling:**\n   - If a field contains special characters (hyphens, spaces, etc.), enclose with square brackets: `[field-name]`, `[customer id]`\n   - For array notation, use `[*]` to indicate array items: `items[*].id` (replace `*` with index if needed)\n\n**Field path syntax**\n\n- **Simple field:** `id`, `email`, `customerId`\n- **Nested field:** `customer.id`, `billing.email`\n- **Field with special chars:** `[customer-id]`, `[external_id]`, `[Customer Name]`\n- **Array field:** `items[*].id` (or `items[0].id` for specific index)\n- **Nested in array:** `addresses[*].postalCode`\n\n**Examples**\n\nThese examples show ONLY the mongodb configuration (not the full import structure):\n\n**Example 1: Simple field**\nPrompt: \"Create customers while ignoring existing customers indicated by the id field\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"customers\",\n  \"ignoreExtract\": \"id\"\n}\n```\n(Note: `ignoreExisting: true` should also be set, but at a different level - not shown here)\n\n**Example 2: Field with special characters**\nPrompt: \"Import vendors, skip existing ones based on the vendor-code field\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"vendors\",\n  \"ignoreExtract\": \"[vendor-code]\"\n}\n```\n\n**Example 3: Nested field**\nPrompt: \"Add accounts, ignore existing where customer.email matches\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"accounts\",\n  \"ignoreExtract\": \"customer.email\"\n}\n```\n\n**Example 4: No ignoreExisting scenario**\nPrompt: \"Create or update all customer records\"\n```json\n{\n  \"method\": \"updateOne\",\n  \"collection\": \"customers\"\n}\n```\n(ignoreExtract should NOT be set when ignoreExisting is not true)\n\n**Important notes**\n\n- **DO NOT SET** this field if `ignoreExisting` is not true\n- **`ignoreExisting` is NOT part of this mongodb schema** - it belongs at a different level\n- This specifies the SOURCE field, not the target MongoDB field\n- The field path is case-sensitive\n- Must be a valid field that exists in the incoming data\n- Works in conjunction with `ignoreExisting` - both must be set for this feature to work\n- If the specified field has a value in the incoming record, that record is considered existing and will be skipped\n\n**Common patterns**\n\n- \"ignore existing [records] indicated by the **[field]**\" → Set `ignoreExtract: \"[field]\"`\n- \"skip existing [records] based on **[field]**\" → Set `ignoreExtract: \"[field]\"`\n- \"if **[field]** is populated\" → Set `ignoreExtract: \"[field]\"`\n- No mention of specific field for 
identification → May need to use default like \"id\" or \"_id\""},"ignoreLookupFilter":{"type":"string","description":"If you are adding documents to your MongoDB instance and you have the Ignore Existing flag set to true, please enter a filter object here to find existing documents in this collection. The value of this field must be a valid JSON string describing a MongoDB filter object in the correct format and with the correct operators. Refer to the MongoDB documentation for the list of valid query operators and the correct filter object syntax.\n**CRITICAL:** A JSON-stringified MongoDB query filter used to search for existing documents in the target collection.\n\nIt defines the criteria to match incoming records against existing MongoDB documents. If the query returns a result, the record is considered \"existing\" and the operation (usually insert) is skipped.\n\n**Critical distinction vs `ignoreExtract`**\n- Use **`ignoreExtract`** when you just want to check if a field exists on the **INCOMING** record (no DB query).\n- Use **`ignoreLookupFilter`** when you need to query the **MONGODB DATABASE** to see if a record exists (e.g., \"check if email exists in DB\").\n\n**When to set this field**\nSet this field when:\n1. Top-level `ignoreExisting` is `true`.\n2. The prompt implies checking the database for duplicates (e.g., \"skip if email already exists\", \"prevent duplicate SKUs\", \"match on email\").\n\n**Format requirements**\n- Must be a valid **JSON string** representing a MongoDB query.\n- Use Handlebars `{{FieldName}}` to reference values from the incoming record.\n- Format: `\"{ \\\"mongoField\\\": \\\"{{incomingField}}\\\" }\"`\n\n**Examples**\n\n**Example 1: Match on a single field**\nPrompt: \"Insert users, skip if email already exists in the database\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"users\",\n  \"ignoreLookupFilter\": \"{\\\"email\\\":\\\"{{email}}\\\"}\"\n}\n```\n\n**Example 2: Match on multiple fields**\nPrompt: \"Add products, ignore if SKU and VerifyID match existing products\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"products\",\n  \"ignoreLookupFilter\": \"{\\\"sku\\\":\\\"{{sku}}\\\", \\\"verify_id\\\":\\\"{{verify_id}}\\\"}\"\n}\n```\n\n**Example 3: Using operators**\nPrompt: \"Import orders, ignore if order_id matches\"\n```json\n{\n  \"method\": \"insertMany\",\n  \"collection\": \"orders\",\n  \"ignoreLookupFilter\": \"{\\\"order_id\\\": { \\\"$eq\\\": \\\"{{id}}\\\" } }\"\n}\n```\n\n**Important notes**\n- The value MUST be a string (stringify the JSON).\n- Mongo field names (keys) must match the **collection schema**.\n- Handlebars variables (values) must match the **incoming data**."}}},"NetSuite":{"type":"object","description":"Configuration for NetSuite imports","properties":{"lookups":{"type":"object","properties":{"type":{"type":"string","description":"The type property specifies the category or classification of the lookup being referenced. 
It defines the nature or kind of data that the lookup represents, enabling the system to handle it appropriately based on its type.\n\n**Field behavior**\n- Determines the category or classification of the lookup.\n- Influences how the lookup data is processed and validated.\n- Helps in filtering or grouping lookups by their type.\n- Typically represented as a string or enumerated value indicating the lookup category.\n\n**Implementation guidance**\n- Define a clear set of allowed values or an enumeration for the type to ensure consistency.\n- Validate the type value against the predefined set during data input or API requests.\n- Use the type property to drive conditional logic or UI rendering related to the lookup.\n- Document all possible type values and their meanings for API consumers.\n\n**Examples**\n- \"country\" — representing a lookup of countries.\n- \"currency\" — representing a lookup of currency codes.\n- \"status\" — representing a lookup of status codes or states.\n- \"category\" — representing a lookup of item categories.\n\n**Important notes**\n- The type value should be consistent across the system to avoid ambiguity.\n- Changing the type of an existing lookup may affect dependent systems or data integrity.\n- The type property is essential for distinguishing between different lookup datasets.\n\n**Dependency chain**\n- May depend on or be referenced by other properties that require lookup data.\n- Used in conjunction with lookup values or keys to provide meaningful data.\n- Can influence validation rules or business logic applied to the lookup.\n\n**Technical details**\n- Typically implemented as a string or an enumerated type in the API schema.\n- Should be indexed or optimized for quick filtering and retrieval.\n- May be case-sensitive or case-insensitive depending on system design.\n- Should be included in API documentation with all valid values listed."},"searchField":{"type":"string","description":"searchField specifies the particular field or attribute within a dataset or index that the search operation should target. 
This property determines which field the search query will be applied to, enabling focused and efficient retrieval of relevant information.\n\n**Field behavior**\n- Defines the specific field in the data source to be searched.\n- Limits the search scope to the designated field, improving search precision.\n- Can accept field names as strings, corresponding to indexed or searchable attributes.\n- If omitted or null, the search may default to a predefined field or perform a broad search across multiple fields.\n\n**Implementation guidance**\n- Ensure the field name provided matches exactly with the field names defined in the data schema or index.\n- Validate that the field is searchable and supports the type of queries intended.\n- Consider supporting nested or compound field names if the data structure is hierarchical.\n- Provide clear error handling for invalid or unsupported field names.\n- Allow for case sensitivity or insensitivity based on the underlying search engine capabilities.\n\n**Examples**\n- \"title\" — to search within the title field of documents.\n- \"author.name\" — to search within a nested author name field.\n- \"description\" — to search within the description or summary field.\n- \"tags\" — to search within a list or array of tags associated with items.\n\n**Important notes**\n- The effectiveness of the search depends on the indexing and search capabilities of the specified field.\n- Using an incorrect or non-existent field name may result in no search results or errors.\n- Some fields may require special handling, such as tokenization or normalization, to support effective searching.\n- The property is case-sensitive if the underlying system treats field names as such.\n\n**Dependency chain**\n- Dependent on the data schema or index configuration defining searchable fields.\n- May interact with other search parameters such as search query, filters, or sorting.\n- Influences the search engine or database query construction.\n\n**Technical details**\n- Typically represented as a string matching the field identifier in the data source.\n- May support dot notation for nested fields (e.g., \"user.address.city\").\n- Should be sanitized to prevent injection attacks or malformed queries.\n- Used by the search backend to construct field-specific query clauses."},"expression":{"type":"string","description":"Expression used to define the criteria or logic for the lookup operation. 
This expression typically consists of a string that can include variables, operators, functions, or references to other data elements, enabling dynamic and flexible lookup conditions.\n\n**Field behavior**\n- Specifies the condition or formula that determines how the lookup is performed.\n- Can include variables, constants, operators, and functions to build complex expressions.\n- Evaluated at runtime to filter or select data based on the defined logic.\n- Supports dynamic referencing of other fields or parameters within the lookup context.\n\n**Implementation guidance**\n- Ensure the expression syntax is consistent with the supported expression language or parser.\n- Validate expressions before execution to prevent runtime errors.\n- Support common operators (e.g., ==, !=, >, <, AND, OR) and functions as per the system capabilities.\n- Allow expressions to reference other properties or variables within the lookup scope.\n- Provide clear error messages if the expression is invalid or cannot be evaluated.\n\n**Examples**\n- `\"status == 'active' AND score > 75\"`\n- `\"userRole == 'admin' OR accessLevel >= 5\"`\n- `\"contains(tags, 'urgent')\"`\n- `\"date >= '2024-01-01' AND date <= '2024-12-31'\"`\n\n**Important notes**\n- The expression must be a valid string conforming to the expected syntax.\n- Incorrect or malformed expressions may cause lookup failures or unexpected results.\n- Expressions should be designed to optimize performance, avoiding overly complex or resource-intensive logic.\n- The evaluation context (available variables and functions) should be clearly documented for users.\n\n**Dependency chain**\n- Depends on the lookup context providing necessary variables or data references.\n- May depend on the expression evaluation engine or parser integrated into the system.\n- Influences the outcome of the lookup operation and subsequent data processing.\n\n**Technical details**\n- Typically represented as a string data type.\n- Parsed and evaluated by an expression engine or interpreter at runtime.\n- May support standard expression languages such as SQL-like syntax, JSONPath, or custom domain-specific languages.\n- Should handle escaping and quoting of string literals within the expression."},"resultField":{"type":"string","description":"resultField specifies the name of the field in the lookup result that contains the desired value to be retrieved or used.\n\n**Field behavior**\n- Identifies the specific field within the lookup result data to extract.\n- Determines which piece of information from the lookup response is returned or processed.\n- Must correspond to a valid field name present in the lookup result structure.\n- Used to map or transform lookup data into the expected output format.\n\n**Implementation guidance**\n- Ensure the field name matches exactly with the field in the lookup result, including case sensitivity if applicable.\n- Validate that the specified field exists in all possible lookup responses to avoid runtime errors.\n- Use this property to customize which data from the lookup is utilized downstream.\n- Consider supporting nested field paths if the lookup result is a complex object.\n\n**Examples**\n- \"email\" — to extract the email address field from the lookup result.\n- \"userId\" — to retrieve the user identifier from the lookup data.\n- \"address.city\" — to access a nested city field within an address object in the lookup result.\n\n**Important notes**\n- Incorrect or misspelled field names will result in missing or null values.\n- The field must be 
present in the lookup response schema; otherwise, the lookup will not yield useful data.\n- This property does not perform any transformation; it only selects the field to be used.\n\n**Dependency chain**\n- Depends on the structure and schema of the lookup result data.\n- Used in conjunction with the lookup operation that fetches the data.\n- May influence downstream processing or mapping logic that consumes the lookup output.\n\n**Technical details**\n- Typically a string representing the key or path to the desired field in the lookup result object.\n- May support dot notation for nested fields if supported by the implementation.\n- Should be validated against the lookup result schema during configuration or runtime."},"includeInactive":{"type":"boolean","description":"IncludeInactive: Specifies whether to include inactive items in the lookup results.\n**Field behavior**\n- When set to true, the lookup results will contain both active and inactive items.\n- When set to false or omitted, only active items will be included in the results.\n- Affects the filtering logic applied during data retrieval for lookups.\n**Implementation guidance**\n- Default the value to false if not explicitly provided to avoid unintended inclusion of inactive items.\n- Ensure that the data source supports filtering by active/inactive status.\n- Validate the input to accept only boolean values.\n- Consider performance implications when including inactive items, especially if the dataset is large.\n**Examples**\n- includeInactive: true — returns all items regardless of their active status.\n- includeInactive: false — returns only items marked as active.\n**Important notes**\n- Including inactive items may expose deprecated or obsolete data; use with caution.\n- The definition of \"inactive\" depends on the underlying data model and should be clearly documented.\n- This property is typically used in administrative or audit contexts where full data visibility is required.\n**Dependency chain**\n- Dependent on the data source’s ability to distinguish active vs. inactive items.\n- May interact with other filtering or pagination parameters in the lookup API.\n**Technical details**\n- Data type: Boolean.\n- Default value: false.\n- Typically implemented as a query parameter or request body field in lookup API calls.\n- Requires backend support to filter data based on active status flags or timestamps."},"_id":{"type":"object","description":"_id: The unique identifier for the lookup entry within the system. 
This identifier is typically a string or an ObjectId that uniquely distinguishes each lookup record from others in the database or dataset.\n**Field behavior**\n- Serves as the primary key for the lookup entry.\n- Must be unique across all lookup entries.\n- Immutable once assigned; should not be changed to maintain data integrity.\n- Used to reference the lookup entry in queries, updates, and deletions.\n**Implementation guidance**\n- Use a consistent format for the identifier, such as a UUID or database-generated ObjectId.\n- Ensure uniqueness by leveraging database constraints or application-level checks.\n- Assign the _id at the time of creation of the lookup entry.\n- Avoid exposing internal _id values directly to end-users unless necessary.\n**Examples**\n- \"507f1f77bcf86cd799439011\"\n- \"123e4567-e89b-12d3-a456-426614174000\"\n- \"lookup_001\"\n**Important notes**\n- The _id field is critical for data integrity and should be handled with care.\n- Changing the _id after creation can lead to broken references and data inconsistencies.\n- In some systems, the _id may be automatically generated by the database.\n**Dependency chain**\n- May be referenced by other fields or entities that link to the lookup entry.\n- Dependent on the database or storage system's method of generating unique identifiers.\n**Technical details**\n- Typically stored as a string or a specialized ObjectId type depending on the database.\n- Indexed to optimize query performance.\n- Often immutable and enforced by database schema constraints."},"default":{"type":["object","null"],"description":"A boolean property indicating whether the lookup is the default selection among multiple lookup options.\n\n**Field behavior**\n- Determines if this lookup is automatically selected or used by default in the absence of user input.\n- Only one lookup should be marked as default within a given context to avoid ambiguity.\n- Influences the initial state or value presented to the user or system when multiple lookups are available.\n\n**Implementation guidance**\n- Set to `true` for the lookup that should be the default choice; all others should be `false` or omitted.\n- Validate that only one lookup per group or context has `default` set to `true`.\n- Use this property to pre-populate fields or guide user selections in UI or processing logic.\n\n**Examples**\n- `default: true` — This lookup is the default selection.\n- `default: false` — This lookup is not the default.\n- Omitted `default` property implies the lookup is not the default.\n\n**Important notes**\n- If multiple lookups have `default` set to `true`, the system behavior may be unpredictable.\n- The absence of a `default` lookup means no automatic selection; user input or additional logic is required.\n- This property is typically optional but recommended for clarity in multi-lookup scenarios.\n\n**Dependency chain**\n- Related to the `lookups` array or collection where multiple lookup options exist.\n- May influence UI components or backend logic that rely on default selections.\n\n**Technical details**\n- Data type: Boolean (`true` or `false`).\n- Default value: Typically `false` if omitted.\n- Should be included at the same level as other lookup properties within the `lookups` object."}},"description":"A comprehensive collection of lookup tables or reference data sets designed to support and enhance the main data model or application logic. 
These lookup tables provide predefined, authoritative sets of values that can be consistently referenced throughout the system to ensure data uniformity, minimize redundancy, and facilitate robust data validation and user interface consistency. They serve as centralized repositories for static or infrequently changing data that underpin dynamic application behavior, such as dropdown options, status codes, categorizations, and mappings. By centralizing this reference data, the system promotes maintainability, reduces errors, and enables seamless integration across different modules and services. Lookup tables may also support complex data structures, localization, versioning, and hierarchical relationships to accommodate diverse application requirements and evolving business rules.\n\n**Field behavior**\n- Contains multiple distinct lookup tables, each identified by a unique name representing a specific domain or category of reference data.\n- Each lookup table comprises structured entries, which may be simple key-value pairs or complex objects with multiple attributes, defining valid options, mappings, or metadata.\n- Lookup tables standardize inputs, support UI components like dropdowns and filters, enforce data integrity, and drive conditional logic across the application.\n- Typically static or updated infrequently, ensuring stability and consistency in dependent processes.\n- Supports localization or internationalization to provide values in multiple languages or regional formats.\n- May represent hierarchical or relational data structures within lookup tables to capture complex relationships.\n- Enables versioning to track changes over time and facilitate rollback, auditing, or backward compatibility.\n- Acts as a single source of truth for reference data, reducing duplication and discrepancies across systems.\n- Can be extended or customized to meet specific domain or organizational needs without impacting core application logic.\n\n**Implementation guidance**\n- Organize as a dictionary or map where each key corresponds to a lookup table name and the associated value contains the set of entries.\n- Implement efficient loading, caching, and retrieval mechanisms to optimize performance and minimize latency.\n- Provide controlled update or refresh capabilities to allow seamless modifications without disrupting ongoing application operations.\n- Enforce strict validation rules on lookup entries to prevent invalid, inconsistent, or duplicate data.\n- Incorporate support for localization and internationalization where applicable, enabling multi-language or region-specific values.\n- Maintain versioning or metadata to track changes and support backward compatibility.\n- Consider access control policies if lookup data includes sensitive or restricted information.\n- Design APIs or interfaces for easy querying, filtering, and retrieval of lookup data by client applications.\n- Ensure synchronization mechanisms are in place"},"operation":{"type":"string","enum":["add","update","addupdate","attach","detach","delete"],"description":"Specifies the type of operation to perform on NetSuite records. This is a plain string value, NOT an object.\n\n**Valid values**\n- \"add\" — Create a new record. Use when importing new records only.\n- \"update\" — Update an existing record. Requires internalIdLookup to find the record.\n- \"addupdate\" — Upsert: update if found, create if not. Requires internalIdLookup. 
This is the most common operation.\n- \"attach\" — Attach a record to another record.\n- \"detach\" — Detach a record from another record.\n- \"delete\" — Delete a record. Requires internalIdLookup to find the record.\n\n**Implementation guidance**\n- Value is automatically lowercased by the API.\n- For \"addupdate\", \"update\", and \"delete\", you MUST also set internalIdLookup to specify how to find existing records.\n- For \"add\" with ignoreExisting, set internalIdLookup to check for duplicates before creating.\n- Default to \"addupdate\" when the user says \"sync\", \"upsert\", or \"create or update\".\n- Use \"add\" when the user says \"create\" or \"insert\" without mentioning updates.\n\n**Examples**\n- \"addupdate\" — for syncing records (create or update)\n- \"add\" — for creating new records only\n- \"update\" — for updating existing records only\n- \"delete\" — for removing records\n\n**Important notes**\n- This field is a string, NOT an object. Set it directly: \"operation\": \"addupdate\"\n- Do NOT wrap it in an object like {\"type\": \"addupdate\"} — that will cause a validation error."},"isFileProvider":{"type":"boolean","description":"isFileProvider indicates whether the NetSuite integration functions as a file provider, enabling comprehensive capabilities to access, manage, and manipulate files within the NetSuite environment. When enabled, this property grants the integration full file-related operations such as browsing directories, uploading new files, downloading existing files, updating file metadata, and deleting files directly through the integration interface. This functionality is essential for workflows that require seamless, automated interaction with NetSuite's file storage system, facilitating efficient document handling, version control, and integration with external file management processes. 
It also impacts the availability of specific API endpoints and user interface components related to file management, and may influence system performance and API rate limits due to the volume of file operations.\n\n**Field behavior**\n- When set to true, activates full file management features including browsing, uploading, downloading, updating, and deleting files within NetSuite.\n- When false or omitted, disables all file provider functionalities, preventing any file-related operations through the integration.\n- Acts as a toggle controlling the availability of file-centric capabilities and related API endpoints.\n- Changes typically require re-authentication or reinitialization to apply updated file access permissions correctly.\n- May affect integration performance and API rate limits due to potentially high file operation volumes.\n\n**Implementation guidance**\n- Enable only if direct, comprehensive interaction with NetSuite files is required.\n- Ensure the integration has appropriate NetSuite permissions, roles, and OAuth scopes for secure file access and manipulation.\n- Validate strictly as a boolean to prevent misconfiguration.\n- Assess security implications thoroughly, enforcing strict access controls and audit logging when enabled.\n- Monitor API usage and system performance closely, as file operations can increase load and impact rate limits.\n- Coordinate with NetSuite administrators to align file access policies with organizational security and compliance standards.\n- Consider effects on middleware and downstream systems handling file transfers, error handling, and synchronization.\n\n**Examples**\n- `isFileProvider: true` — Enables full file provider capabilities, allowing browsing, uploading, downloading, and managing files within NetSuite.\n- `isFileProvider: false` — Disables all file-related functionalities, restricting the integration to non-file operations only.\n\n**Important notes**\n- Enabling file provider functionality may require additional NetSuite OAuth scopes, roles, or permissions specifically related to file access.\n- File operations can significantly impact API rate limits and quotas; plan and monitor usage accordingly.\n- This property exclusively controls file management capabilities and does not affect other integration functionalities."},"customFieldMetadata":{"type":"object","description":"Metadata information related to custom fields defined within the NetSuite environment, offering a comprehensive and detailed overview of each custom field's attributes, configuration settings, and operational parameters. This includes critical details such as the field type (e.g., checkbox, select, date, text), label, internal ID, source lists or records for select-type fields, default values, validation rules, display properties, and behavioral flags like mandatory, read-only, or hidden status. The metadata acts as an authoritative reference for accurately managing, validating, and utilizing custom fields during integrations, data processing, and API interactions, ensuring that custom field data is interpreted and handled precisely according to its defined structure, constraints, and business logic. 
It also enables dynamic adaptation to changes in custom field configurations, supporting robust and flexible integration workflows that maintain data integrity and enforce business rules effectively.\n\n**Field behavior**\n- Provides exhaustive metadata for each custom field, including type, label, internal ID, source references, default values, validation criteria, and display properties.\n- Reflects the current and authoritative configuration and constraints of custom fields within the NetSuite account.\n- Enables dynamic validation, processing, and rendering of custom fields during data import, export, and API operations.\n- Supports understanding of field dependencies, mandatory status, read-only or hidden flags, and other behavioral characteristics.\n- Facilitates enforcement of business rules and data integrity through validation and conditional logic based on metadata.\n\n**Implementation guidance**\n- Ensure this property is consistently populated with accurate, up-to-date, and comprehensive metadata for all relevant custom fields in the integration context.\n- Regularly synchronize and refresh metadata with the NetSuite environment to capture any additions, modifications, or deletions of custom fields.\n- Leverage this metadata to validate data payloads before transmission and to correctly interpret and process received data.\n- Use metadata details to implement field-specific logic, such as enforcing mandatory fields, applying validation rules, handling default values, and respecting read-only or hidden statuses.\n- Consider caching metadata where appropriate to optimize performance, while ensuring mechanisms exist to detect and apply updates promptly.\n\n**Examples**\n- Field type: \"checkbox\", label: \"Is Active\", id: \"custentity_is_active\", mandatory: true\n- Field type: \"select\", label: \"Customer Type\", id: \"custentity_customer_type\", sourceList: \"customerTypes\", readOnly: false\n- Field type: \"date\", label: \"Contract Start Date\", id: \"custentity_contract_start"},"recordType":{"type":"string","description":"recordType specifies the exact type of record within the NetSuite system that the API operation will interact with. This property is critical as it defines the schema, fields, validation rules, workflows, and behaviors applicable to the record being accessed, created, updated, or deleted. By accurately specifying the recordType, the API can correctly interpret the structure of the data payload, enforce appropriate business logic, and route the request to the correct processing module within NetSuite. This ensures precise data manipulation, retrieval, and integration aligned with the intended record context. 
Proper use of recordType enables seamless interaction with both standard and custom NetSuite records, supporting robust and flexible API operations across diverse business scenarios.\n\n**Field behavior**\n- Defines the specific NetSuite record type for the API request (e.g., customer, salesOrder, invoice).\n- Directly influences validation rules, available fields, permissible operations, and workflows in the request or response.\n- Determines the context, permissions, and business logic applicable to the record.\n- Must be set accurately and consistently to ensure correct processing and avoid errors.\n- Affects how the API interprets, validates, and processes the associated data payload.\n- Supports interaction with both standard and custom record types configured in the NetSuite account.\n- Enables the API to dynamically adapt to different record structures and business processes based on the recordType.\n\n**Implementation guidance**\n- Use the exact NetSuite internal record type identifiers as defined in official NetSuite documentation and account-specific customizations.\n- Validate the recordType value against the list of supported record types before processing the request to prevent errors.\n- Ensure compatibility of the recordType with the specific API endpoint, operation, and NetSuite account configuration.\n- Implement robust error handling for missing, invalid, or unsupported recordType values, providing clear and actionable feedback.\n- Regularly update the recordType list to reflect changes in NetSuite releases, custom record definitions, and account-specific configurations.\n- Consider role-based access and feature enablement that may affect the availability or behavior of certain record types.\n- When dealing with custom records, verify that the recordType matches the custom record’s internal ID and that the API user has appropriate permissions.\n\n**Examples**\n- \"customer\"\n- \"salesOrder\"\n- \"invoice\"\n- \"employee\"\n- \"itemFulfillment\"\n- \"vendorBill\"\n- \"purchaseOrder\"\n- \"customRecordType123\" (example of a custom record)\n\n**Important notes**\n- The recordType value is case"},"recordTypeId":{"type":"string","description":"recordTypeId is the unique identifier that specifies the exact type or category of a record within the NetSuite system. It defines the structure, fields, validation rules, workflows, and behavior applicable to the record instance, ensuring that API operations target the correct record schema and processing logic. This identifier is essential for routing requests to the appropriate record handlers, enforcing data integrity, and validating the data according to the specific record type's requirements. 
Once set for a record instance, the recordTypeId is immutable, as changing it would imply a fundamentally different record type and could lead to inconsistencies or errors in processing.\n\n**Field behavior**\n- Specifies the precise category or type of NetSuite record being accessed or manipulated.\n- Determines the applicable schema, fields, validation rules, workflows, and business logic for the record.\n- Directs API requests to the correct processing logic, endpoint, and validation routines based on record type.\n- Immutable for a given record instance; changing it indicates a different record type and is not permitted.\n- Influences permissions, access control, and integration behaviors tied to the record type.\n- Affects how related records and dependencies are handled within the system.\n\n**Implementation guidance**\n- Use the exact internal string ID or numeric identifier as defined by NetSuite for record types.\n- Validate the recordTypeId against the current list of supported, enabled, and custom NetSuite record types before processing.\n- Ensure consistency and compatibility between recordTypeId and related fields or data structures within the payload.\n- Implement robust error handling for invalid, unsupported, deprecated, or mismatched recordTypeId values.\n- Keep the recordTypeId definitions synchronized with updates, customizations, and configurations from the NetSuite account to avoid mismatches.\n- Consider case sensitivity and formatting rules as dictated by the NetSuite API specifications.\n- When integrating with custom record types, verify that custom IDs conform to naming conventions and are properly registered in the NetSuite environment.\n\n**Examples**\n- \"customer\" — representing a standard customer record type.\n- \"salesOrder\" — representing a sales order record type.\n- \"invoice\" — representing an invoice record type.\n- Numeric identifiers such as \"123\" if the system uses numeric codes for record types.\n- Custom record type IDs defined within a specific NetSuite account, e.g., \"customRecordType_456\".\n- \"employee\" — representing an employee record type.\n- \"purchaseOrder\" — representing a purchase order record type.\n\n**Important notes**"},"retryUpdateAsAdd":{"type":"boolean","description":"A boolean flag that controls whether failed update operations in NetSuite integrations should be automatically retried as add operations. When set to true, if an update attempt fails—commonly because the target record does not exist—the system will attempt to add the record instead. This mechanism helps maintain data synchronization by addressing scenarios where records may have been deleted, never created, or are otherwise missing, thereby reducing manual intervention and minimizing data discrepancies. 
It ensures smoother integration workflows by providing a fallback strategy specifically for update failures, enhancing resilience and data consistency.\n\n**Field behavior**\n- Governs retry logic exclusively for update operations within NetSuite integrations.\n- When true, triggers an automatic retry as an add operation upon update failure.\n- When false or unset, update failures result in immediate error responses without retries.\n- Facilitates handling of missing or deleted records to improve synchronization accuracy.\n- Does not influence initial add operations or other API call types.\n- Activates retry logic only after detecting a failed update attempt.\n\n**Implementation guidance**\n- Enable this flag when there is a possibility that update requests target non-existent records.\n- Evaluate the potential for duplicate record creation if the add operation is not idempotent.\n- Implement comprehensive error handling and logging to monitor retry attempts and outcomes.\n- Conduct thorough testing in staging or development environments before deploying to production.\n- Coordinate with other retry or error recovery configurations to avoid conflicting behaviors.\n- Ensure accurate detection of update failure causes to apply retries appropriately and avoid unintended consequences.\n\n**Examples**\n- `retryUpdateAsAdd: true` — Automatically retries adding the record if an update fails due to a missing record.\n- `retryUpdateAsAdd: false` — Update failures are reported immediately without retrying as add operations.\n\n**Important notes**\n- Enabling this flag may lead to duplicate records if update failures occur for reasons other than missing records.\n- Use with caution in environments requiring strict data integrity, auditability, and compliance.\n- This flag only modifies retry behavior after an update failure; it does not alter the initial add or update request logic.\n- Accurate failure cause analysis is critical to ensure retries are applied correctly and safely.\n\n**Dependency chain**\n- Relies on execution of update operations against NetSuite records.\n- Works in conjunction with error detection and handling mechanisms to identify failure reasons.\n- Commonly used alongside other retry policies or error recovery settings to enhance integration robustness.\n\n**Technical details**\n- Data type: boolean\n- Default value: false"},"batchSize":{"type":"number","description":"Specifies the number of records to process in a single batch operation when interacting with the NetSuite API, directly influencing the efficiency, performance, and resource utilization of data processing tasks. This parameter controls how many records are sent or retrieved in one API call, balancing throughput with system constraints such as memory usage, network latency, and API rate limits. Proper configuration of batchSize is essential to optimize processing speed while minimizing the risk of errors, timeouts, or throttling by the NetSuite API. Adjusting batchSize allows for fine-tuning of integration workflows to accommodate varying workloads, system capabilities, and API restrictions, ensuring reliable and efficient data synchronization.  \n**Field behavior:**  \n- Determines the count of records included in each batch request to the NetSuite API.  \n- Directly impacts the number of API calls required to complete processing of all records.  \n- Larger batch sizes can improve throughput by reducing the number of calls but may increase memory consumption, processing latency, and risk of timeouts per call.  
\n- Smaller batch sizes reduce memory footprint and improve responsiveness but may increase the total number of API calls and associated overhead.  \n- Influences error handling complexity, as failures in larger batches may require more extensive retries or partial processing logic.  \n- Affects how quickly data is processed and synchronized between systems.  \n**Implementation guidance:**  \n- Select a batch size that balances optimal performance with available system resources, network conditions, and API constraints.  \n- Conduct thorough performance testing with varying batch sizes to identify the most efficient configuration for your specific workload and environment.  \n- Ensure the batch size complies with NetSuite API limits, including maximum records per request and rate limiting policies.  \n- Implement robust error handling to manage partial failures within batches, including retry mechanisms, logging, and potential batch splitting.  \n- Consider the complexity, size, and processing time of individual records when determining batch size, as more complex or larger records may require smaller batches.  \n- Monitor runtime metrics and adjust batchSize dynamically if possible, based on system load, API response times, and error rates.  \n**Examples:**  \n- 100 (process 100 records per batch for balanced throughput and resource use)  \n- 500 (process 500 records per batch to maximize throughput in high-capacity environments)  \n- 50 (smaller batch size suitable for environments with limited memory, stricter API limits, or higher error sensitivity)  \n**Important notes:**  \n- The batchSize setting"},"internalIdLookup":{"type":"object","properties":{"extract":{"type":"string","description":"Extract the relevant internal ID from the provided NetSuite data response to uniquely identify and reference specific records for subsequent API operations. This involves accurately parsing the response data—whether in JSON, XML, or other formats—to isolate the internal ID, which serves as a critical key for actions such as updates, deletions, or detailed queries within NetSuite. The extraction process must be precise, resilient to variations in response structures, and capable of handling nested or complex data formats to ensure reliable downstream processing and prevent errors in record manipulation.  \n**Field behavior:**  \n- Extracts a unique internal ID value from NetSuite API responses or data objects.  \n- Acts as the primary identifier for referencing and manipulating NetSuite records.  \n- Typically invoked after lookup, search, or query operations to obtain the necessary ID for further API calls.  \n- Supports handling of multiple data formats including JSON, XML, and potentially others.  \n- Ensures the extracted ID is valid, correctly formatted, and usable for subsequent operations.  \n- Handles nested or complex response structures to reliably locate the internal ID.  \n- Operates as a read-only field derived exclusively from API response data.  \n\n**Implementation guidance:**  \n- Implement robust and flexible parsing logic tailored to the expected NetSuite response formats.  \n- Validate the extracted internal ID against expected patterns (e.g., numeric strings) to ensure correctness.  \n- Incorporate comprehensive error handling for cases where the internal ID is missing, malformed, or unexpected.  \n- Design the extraction mechanism to be configurable and adaptable to changes in NetSuite API response schemas or record types.  
\n- Support asynchronous processing if API responses are received asynchronously or in streaming formats.  \n- Log extraction attempts and failures to facilitate debugging and maintenance.  \n- Regularly update extraction logic to accommodate API version changes or schema updates.  \n\n**Examples:**  \n- Extracting `\"internalId\": \"12345\"` from a JSON response such as `{ \"internalId\": \"12345\", \"name\": \"Sample Record\" }`.  \n- Parsing nested XML elements or attributes to retrieve the internalId value, e.g., `<record internalId=\"12345\">`.  \n- Using the extracted internal ID to perform update, delete, or detailed fetch operations on a NetSuite record.  \n- Handling cases where the internal ID is embedded within deeply nested or complex response objects.  \n\n**Important notes:**  \n- The internal ID uniquely identifies records within NetSuite and is essential for accurate and reliable API"},"searchField":{"type":"string","description":"The specific field within a NetSuite record that serves as the key attribute for performing an internal ID lookup search. This field determines which property of the record will be queried to locate and retrieve the corresponding internal ID, thereby directly influencing the precision, relevance, and efficiency of the lookup operation. Selecting an appropriate searchField is critical for ensuring accurate matches and optimal performance, as it typically should be a unique, indexed, or otherwise optimized attribute within the record schema. The choice of searchField impacts not only the speed of the search but also the uniqueness and reliability of the results returned, making it essential to align with the record type, user permissions, and any customizations present in the NetSuite environment. Proper selection of this field helps avoid ambiguous results and enhances the overall robustness of the lookup process.\n\n**Field behavior**\n- Identifies the exact attribute of the NetSuite record to be searched.\n- Directs the lookup process to compare the provided search value against this field.\n- Affects the accuracy, relevance, and uniqueness of the internal ID retrieved.\n- Commonly corresponds to fields that are indexed or uniquely constrained for efficient querying.\n- Determines the scope and filtering of the search within the record type.\n- Influences how the system handles multiple matches or ambiguous results.\n- Must correspond to a searchable field that the user has permission to access.\n\n**Implementation guidance**\n- Use only valid, supported field names as defined in the NetSuite record schema.\n- Prefer fields that are indexed or optimized for search to enhance lookup speed.\n- Verify the existence and searchability of the field on the specific record type being queried.\n- Choose fields that minimize duplicates to avoid ambiguous or multiple results.\n- Ensure exact case sensitivity and spelling alignment with NetSuite’s API definitions.\n- Test the field’s searchability considering user permissions, record configurations, and any customizations.\n- Avoid fields that are write-only, restricted, or non-searchable due to NetSuite permissions or settings.\n- Consider the impact of custom fields or scripts that may alter standard field behavior.\n- Validate that the field supports the type of search operation intended (e.g., exact match, partial match).\n\n**Examples**\n- \"externalId\" — to locate records by an externally assigned identifier.\n- \"email\" — to find records using an email address attribute.\n- \"name\" — to search by 
the record’s name or title field.\n- \"tranId\" — to query transaction records by their transaction ID.\n- \"entityId"},"expression":{"type":"string","description":"A string representing a search expression used to filter or query records within the NetSuite system based on specific criteria. This expression defines the precise conditions that records must meet to be included in the search results, enabling targeted and efficient data retrieval. It supports a rich syntax including logical operators (AND, OR, NOT), comparison operators (=, !=, >, <, >=, <=), field references, and literal values. Expressions can be nested to form complex queries tailored to specific business needs, allowing for granular control over the search logic. This property is essential for performing internal lookups by matching records against the defined criteria, ensuring that only relevant records are returned.\n\n**Field behavior**\n- Defines the criteria for searching or filtering records by specifying one or more conditions.\n- Supports logical operators such as AND, OR, and NOT to combine multiple conditions logically.\n- Allows inclusion of field names, comparison operators, and literal values to construct meaningful expressions.\n- Enables nested expressions for advanced querying capabilities.\n- Used internally to perform lookups by matching records against the defined expression.\n- Returns records that satisfy all specified conditions within the expression.\n- Evaluates expressions dynamically at runtime to reflect current data states.\n- Supports referencing related record fields to enable cross-record filtering.\n\n**Implementation guidance**\n- Ensure the expression syntax strictly conforms to NetSuite’s search expression format and supported operators.\n- Validate the expression prior to execution to catch syntax errors and prevent runtime failures.\n- Use parameterized values or safe input handling to mitigate injection risks and enhance security.\n- Support and correctly parse nested expressions to handle complex query scenarios.\n- Implement graceful handling for cases where the expression yields no matching records, avoiding errors.\n- Optimize expressions for performance by limiting complexity and avoiding overly broad criteria.\n- Provide clear error messages or feedback when expressions are invalid or unsupported.\n- Consider caching frequently used expressions to improve lookup efficiency.\n\n**Examples**\n- `\"status = 'Open' AND type = 'Sales Order'\"`\n- `\"createdDate >= '2023-01-01' AND createdDate <= '2023-12-31'\"`\n- `\"customer.internalId = '12345' OR customer.email = 'example@example.com'\"`\n- `\"NOT (status = 'Closed' OR status = 'Cancelled')\"`\n- `\"amount > 1000 AND (priority = 'High' OR priority = 'Urgent')\"`\n- `\"item.category = 'Electronics' AND quantity >= 10\"`\n- `\"lastModifiedDate >"}},"description":"internalIdLookup is an object that specifies how the import locates the NetSuite internal ID of an existing record so that the operation can target the correct record. The internal ID is NetSuite's unique, system-generated identifier, so resolving it reliably is essential for operations that modify, remove, or skip existing records. 
The lookup can either read the internal ID directly from the incoming data (via `extract`) or search NetSuite for a matching record (via `searchField` and `expression`).\n\n**Field behavior**\n- Required when the operation is \"update\", \"addupdate\", or \"delete\", since those operations must identify which existing record to act on.\n- Also used with \"add\" when ignoreExisting is enabled, to check for duplicates before creating new records.\n- `extract` reads the internal ID from a field in the data when it is already available.\n- `searchField` and `expression` define a NetSuite search that resolves the internal ID when it is not present in the data.\n- The resolved internal ID determines which NetSuite record is updated, deleted, or skipped.\n\n**Implementation guidance**\n- Prefer `extract` when the incoming data reliably carries the NetSuite internal ID; otherwise configure a search using `searchField` and `expression`.\n- Choose search fields that are unique or indexed, such as \"externalId\" or \"entityId\", to avoid ambiguous matches.\n- Plan for the case where no matching record is found: \"addupdate\" creates a new record, while \"update\" results in an error unless retryUpdateAsAdd is enabled.\n\n**Examples**\n- Setting `extract` to a field path such as \"internalId\" when the incoming records already contain the NetSuite internal ID.\n- Setting `searchField` to \"externalId\" together with a matching `expression` so NetSuite is searched for the corresponding record.\n\n**Important notes**\n- Internal IDs are stable, unique, and system-generated identifiers within NetSuite, providing the most reliable reference for records.\n- This configuration is specific to NetSuite imports and works together with the operation, ignoreExisting, and retryUpdateAsAdd settings to determine how existing records are matched and handled."},"preferences":{"type":"object","properties":{"ignoreReadOnlyFields":{"type":"boolean","description":"ignoreReadOnlyFields specifies whether the system should bypass any attempts to modify fields designated as read-only during data processing or update operations. When set to true, the system silently ignores changes to these protected fields, preventing errors or exceptions that would otherwise occur if such modifications were attempted. This behavior facilitates smoother integration and more resilient data handling, especially in environments where partial updates are common or where read-only constraints are strictly enforced. Conversely, when set to false, the system will attempt to update all fields, including those marked as read-only, which may result in errors or exceptions if such modifications are not permitted.  \n**Field behavior:**  \n- Controls whether read-only fields are excluded from update or modification attempts.  \n- When true, modifications to read-only fields are skipped silently without raising errors or exceptions.  \n- When false, attempts to modify read-only fields may cause errors, exceptions, or transaction failures.  \n- Helps maintain system stability by preventing unauthorized or unsupported changes to protected data.  
\n- Influences how partial updates and patch operations are processed in the presence of read-only constraints.  \n- Affects error reporting and feedback mechanisms during data validation and update workflows.  \n\n**Implementation guidance:**  \n- Enable this flag to enhance robustness and fault tolerance when integrating with APIs or systems enforcing read-only field restrictions.  \n- Use in scenarios where it is acceptable or expected that read-only fields remain unchanged during update operations.  \n- Ensure downstream business logic, validation rules, and data consistency checks accommodate the possibility that some fields may not be updated.  \n- Carefully evaluate the impact on data integrity, audit requirements, and compliance before enabling this behavior.  \n- Coordinate with audit, logging, and monitoring mechanisms to accurately capture which fields were updated, skipped, or ignored.  \n- Consider the implications for user feedback and error handling in client applications consuming the API.  \n\n**Examples:**  \n- ignoreReadOnlyFields: true — Update operations will silently skip read-only fields, avoiding errors and allowing partial updates to succeed.  \n- ignoreReadOnlyFields: false — Update operations will attempt to modify all fields, potentially triggering errors or exceptions if read-only fields are included.  \n- In a bulk update scenario, setting this to true prevents the entire batch from failing due to attempts to modify read-only fields.  \n- When performing a patch request, enabling this flag allows the system to apply only permissible changes without interruption.  \n\n**Important notes:**  \n- Setting this flag to true does not grant permission to"},"warningAsError":{"type":"boolean","description":"Indicates whether warnings encountered during processing should be escalated and treated as errors, causing the operation to fail immediately upon any warning detection. This setting enforces stricter validation and operational integrity by preventing processes from completing successfully if any warnings arise, thereby ensuring higher data quality, compliance standards, and system reliability. When enabled, it transforms non-critical warnings into blocking errors, which can halt workflows, trigger rollbacks, or initiate error handling routines. 
This behavior is particularly valuable in environments where data accuracy and regulatory adherence are paramount, but it may also increase failure rates and require more robust error management and user communication strategies.\n\n**Field behavior**\n- When set to true, any warning triggers an error state, halting or failing the operation immediately.\n- When set to false, warnings are logged or reported but do not interrupt the workflow or cause failure.\n- Influences validation, data processing, integration, and transactional workflows where warnings might otherwise be non-blocking.\n- Affects error reporting mechanisms and may alter the flow of retries, rollbacks, or aborts.\n- Can change the overall system behavior by elevating the severity of warnings to errors.\n- Impacts downstream processes by potentially preventing partial or inconsistent data states.\n\n**Implementation guidance**\n- Use to enforce strict data integrity and prevent continuation of processes with potential issues.\n- Assess the impact on user experience, as enabling this may increase failure rates and necessitate enhanced error handling and user notifications.\n- Ensure client applications and integrations are designed to handle errors resulting from escalated warnings gracefully.\n- Clearly document the default behavior when this flag is unset to avoid ambiguity for users and developers.\n- Coordinate with logging, monitoring, and alerting systems to provide clear diagnostics and traceability when warnings are treated as errors.\n- Consider providing configuration options at multiple levels (user, project, global) to allow flexible enforcement.\n- Test thoroughly in staging environments to understand the operational impact before enabling in production.\n\n**Examples**\n- warningAsError: true — Data import fails immediately if any warnings are detected during validation, preventing partial or potentially corrupt data ingestion.\n- warningAsError: false — Warnings during data validation are logged but do not stop the import process, allowing for continued operation despite minor issues.\n- warningAsError: true — Integration workflows abort on any warning to maintain strict compliance with regulatory or business rules.\n- warningAsError: false — Batch processing continues despite warnings, with issues flagged for later review.\n- warningAsError: true —"},"skipCustomMetadataRequests":{"type":"boolean","description":"Indicates whether to bypass requests for custom metadata during API operations, enabling optimized performance by reducing unnecessary data retrieval and processing when custom metadata is not required. This flag allows fine-tuning of the API's behavior to specific use cases by controlling the inclusion of potentially resource-intensive custom metadata. By skipping these requests, the system can achieve faster response times, lower bandwidth usage, and reduced computational overhead, especially beneficial in high-throughput or performance-sensitive scenarios. Conversely, including custom metadata ensures comprehensive data context, which is critical for operations requiring full validation, auditing, or enrichment.  \n**Field behavior:**  \n- When set to true, the system omits fetching and processing any custom metadata associated with the request, improving efficiency and reducing payload size.  \n- When set to false or left unspecified, the system retrieves and processes all relevant custom metadata, ensuring complete data context.  
\n- Directly influences the amount of data transmitted and processed, impacting API responsiveness and resource utilization.  \n- Affects downstream workflows that depend on the presence or absence of custom metadata for decision-making or data integrity.  \n**Implementation guidance:**  \n- Employ this flag to optimize performance in scenarios where custom metadata is unnecessary, such as bulk data exports, read-only queries, or environments with stable metadata.  \n- Assess the potential impact on data completeness, validation, auditing, and enrichment to avoid compromising critical business logic.  \n- Clearly communicate the default behavior when the property is omitted to prevent misunderstandings among API consumers.  \n- Enforce boolean validation to maintain consistent and predictable API behavior.  \n- Consider how this setting interacts with caching mechanisms, data synchronization, and other metadata-related preferences to ensure coherent system behavior.  \n**Examples:**  \n- `true`: Skip all custom metadata requests to expedite processing and minimize resource consumption in performance-critical workflows.  \n- `false`: Include custom metadata requests to obtain full metadata details necessary for thorough data handling and validation.  \n**Important notes:**  \n- Skipping custom metadata may lead to incomplete data contexts if downstream processes rely on that metadata for essential functions like validation or auditing.  \n- This setting is most effective when metadata is static, infrequently changed, or irrelevant to the current operation.  \n- Altering this flag can influence caching strategies, data consistency, and synchronization across distributed systems.  \n- Ensure all relevant stakeholders understand the implications of toggling this flag to maintain data integrity and operational correctness.  \n**Dependency chain:**  \n- Relies on the existence and"}},"description":"An object encapsulating the user's customizable settings and options within the NetSuite environment, enabling a highly personalized and efficient user experience. This object encompasses a broad spectrum of configurable preferences including interface layout, language, timezone, notification methods, default currencies, date and time formats, themes, and other user-specific options that directly influence how the system behaves, appears, and interacts with the individual user. Preferences are designed to be flexible, supporting inheritance from role-based or system-wide defaults while allowing users to override settings to suit their unique workflows and requirements. 
The preferences object supports dynamic retrieval and partial updates, ensuring that changes can be made granularly without affecting unrelated settings, thereby maintaining data integrity and user experience consistency.\n\n**Field behavior**\n- Contains user-specific configuration settings that tailor the NetSuite experience to individual needs and roles.\n- Includes preferences related to UI layout, notification settings, default currencies, date/time formats, language, themes, and other personalization options.\n- Supports dynamic retrieval and partial updates, enabling users to modify individual preferences without overwriting the entire object.\n- Allows inheritance of preferences from role-based or system-wide defaults when user-specific settings are not explicitly defined.\n- Changes to preferences immediately affect the user interface, notification delivery, data presentation, and overall system behavior.\n- Preferences persist across sessions and devices, ensuring a consistent user experience.\n- Supports both simple scalar values and complex nested structures to accommodate diverse configuration needs.\n\n**Implementation guidance**\n- Structure as a nested JSON object with clearly defined and documented sub-properties grouped by categories such as notifications, display settings, localization, and system defaults.\n- Validate all input during updates to ensure data integrity, prevent invalid configurations, and maintain system stability.\n- Implement partial update mechanisms (e.g., PATCH semantics) to allow granular and efficient modifications.\n- Enforce strict access controls to ensure only authorized users can view or modify preferences.\n- Consider versioning the preferences schema to support backward compatibility and future enhancements.\n- Provide comprehensive documentation for each sub-property to facilitate correct usage, integration, and maintenance.\n- Optimize retrieval and update operations for performance, especially in environments with large user bases.\n- Ensure compatibility with role-based access controls and system-wide default settings to maintain coherent preference hierarchies.\n\n**Examples**\n- `{ \"language\": \"en-US\", \"timezone\": \"America/New_York\", \"currency\": \"USD\", \"notifications\": { \"email\": true, \"sms\": false } }`\n- `{ \"dashboardLayout\": \"compact\", \""},"file":{"type":"object","properties":{"name":{"type":"string","description":"The name of the file as it will be identified, stored, and managed within the NetSuite system. This filename serves as the primary unique identifier for the file within its designated folder or context, ensuring precise referencing, retrieval, and manipulation across various NetSuite modules, integrations, and user interfaces. It is critical that the filename is clear, descriptive, and relevant, as it is used not only for backend operations but also for display purposes in reports, dashboards, and file listings. The filename must strictly adhere to NetSuite’s naming conventions and restrictions to prevent conflicts, errors, or access issues, including limitations on character usage and length. 
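For example, a hypothetical upload might set `\"name\": \"invoice_2024_06.pdf\"`, which identifies both the document and its format at a glance. 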
Proper naming facilitates efficient file organization, version control, and seamless integration with external systems.\n\n**Field behavior**\n- Acts as the unique identifier for the file within its folder or storage context, preventing duplication.\n- Used extensively in file retrieval, linking, display, and management operations throughout NetSuite.\n- Must be unique within the folder to avoid overwriting or access conflicts.\n- Supports inclusion of standard file extensions to indicate file type and enable appropriate handling.\n- Case sensitivity may vary depending on the operating environment and integration specifics.\n- Represents only the filename without any directory or path information.\n- Changes to the filename after creation can impact existing references or integrations.\n\n**Implementation guidance**\n- Choose a concise, descriptive, and meaningful name to facilitate easy identification and management.\n- Avoid special characters, symbols, or reserved characters (e.g., \\ / : * ? \" < > |) disallowed by NetSuite’s file naming rules.\n- Ensure the filename length complies with NetSuite’s maximum character limit (typically 255 characters).\n- Always include the appropriate file extension (e.g., .pdf, .docx, .png) to clearly denote the file format.\n- Exclude any path separators, folder names, or hierarchical data—only the filename itself should be provided.\n- Adopt consistent naming conventions that support version control, categorization, or organizational standards.\n- Validate filenames programmatically where possible to enforce compliance and prevent errors.\n- Consider the impact of renaming files on existing workflows and references before making changes.\n\n**Examples**\n- \"invoice_2024_06.pdf\"\n- \"project_plan_v2.docx\"\n- \"company_logo.png\"\n- \"financial_report_Q1_2024.xlsx\"\n- \"user_manual_v1.3.pdf\"\n\n**Important notes**\n- The filename must not contain directory or path information; it strictly represents"},"fileType":{"type":"string","description":"The fileType property specifies the precise format or category of the file managed within the NetSuite file system, such as PDF, CSV, JPEG, or other supported types. This designation is essential because it directly affects how the system processes, displays, stores, and interacts with the file throughout its lifecycle. Correctly identifying the fileType ensures that files are rendered correctly in the user interface, processed through appropriate workflows, and subjected to any file-specific restrictions, permissions, or validations. It is typically required when uploading or creating new files to guarantee compatibility, prevent errors during file operations, and maintain data integrity. 
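For instance, a CSV data file would typically be uploaded with `\"fileType\": \"CSV\"`, while a generated document would use `\"PDF\"` (see the examples below). 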
Additionally, the fileType influences system behaviors such as indexing, searchability, and integration with other NetSuite modules or external systems.\n\n**Field behavior**\n- Defines the file’s format or category, governing system handling, processing, and display.\n- Determines rendering, storage methods, and permissible manipulations within NetSuite.\n- Enables or restricts specific operations based on the file type’s inherent capabilities and limitations.\n- Usually mandatory during file creation or upload to ensure accurate system recognition and handling.\n- Influences validation checks, error handling, and workflow routing during file operations.\n- Affects integration points with other NetSuite features, such as reporting, scripting, or external API interactions.\n\n**Implementation guidance**\n- Validate the fileType against a predefined list of supported NetSuite file types to ensure compatibility and prevent errors.\n- Ensure the fileType accurately reflects the actual content format of the file to avoid processing failures or data corruption.\n- Prefer using standardized MIME types or NetSuite-specific identifiers where applicable for consistency and interoperability.\n- Implement robust error handling to gracefully manage unsupported, unrecognized, or mismatched file types.\n- Consider system versioning, configuration differences, and customizations that may affect supported file types.\n- When changing fileType post-creation, ensure appropriate reprocessing or format conversion to maintain data integrity.\n\n**Examples**\n- \"PDF\" for Portable Document Format files commonly used for documents.\n- \"CSV\" for Comma-Separated Values files frequently used for data exchange.\n- \"JPG\" or \"JPEG\" for image files in JPEG format.\n- \"PLAINTEXT\" for plain text files without formatting.\n- \"EXCEL\" for Microsoft Excel spreadsheet files.\n- \"XML\" for Extensible Markup Language files used in data interchange.\n- \"ZIP\" for compressed archive files.\n\n**Important notes**\n- Consistency between the fileType and the actual file content"},"folder":{"type":"string","description":"The identifier or path of the folder within the NetSuite file cabinet where the file is stored or intended to be stored. This property precisely defines the file's storage location, enabling organized management, categorization, and efficient retrieval within the NetSuite environment. It supports specification either as a unique numeric folder ID or as a string representing the folder path, accommodating both absolute and relative references depending on the API context. 
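For example, `\"folder\": \"123\"` references a folder by its numeric ID, while `\"folder\": \"/Documents/Invoices\"` references one by path (both values illustrative). 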
Proper assignment ensures files are correctly placed within the hierarchical folder structure, facilitating access control, streamlined file operations, and maintaining organizational consistency.\n\n**Field behavior**\n- Specifies the exact folder location for storing or moving the file within the NetSuite file cabinet.\n- Accepts either a numeric folder ID for unambiguous identification or a string folder path for hierarchical referencing.\n- Mandatory when uploading new files or relocating existing files to define their destination.\n- Optional during file metadata retrieval if folder context is implicit or not required.\n- Updating this property on an existing file triggers relocation to the specified folder.\n- Influences file visibility and access permissions based on folder-level security settings.\n- Supports both absolute and relative folder path formats depending on API usage context.\n\n**Implementation guidance**\n- Confirm the target folder exists and is accessible within the NetSuite file cabinet before assignment.\n- Prefer using the internal numeric folder ID to avoid ambiguity and ensure precise targeting.\n- Support both absolute and relative folder paths where the API context allows.\n- Enforce permission validation to verify that the user or integration has adequate rights to access or modify the folder.\n- Normalize folder paths to comply with NetSuite’s hierarchical structure and naming conventions.\n- Provide informative error responses if the folder is invalid, non-existent, or access is denied.\n- When moving files by updating this property, ensure that any dependent metadata or references are updated accordingly.\n\n**Examples**\n- \"123\" (numeric folder ID representing a specific folder)\n- \"/Documents/Invoices\" (string folder path indicating a nested folder structure)\n- \"456\" (numeric folder ID for a project-specific folder)\n- \"Shared/Marketing/Assets\" (relative folder path within the file cabinet hierarchy)\n\n**Important notes**\n- Folder IDs are unique within the NetSuite account and are the recommended method for precise folder referencing.\n- Folder paths must conform to NetSuite’s naming standards and reflect the correct folder hierarchy.\n- Changing the folder property on an existing file effectively moves the file within the file cabinet.\n- Adequate permissions are required to access or modify the specified folder;"},"folderInternalId":{"type":"string","description":"The internal identifier of the folder within the NetSuite file cabinet where the file is stored. This unique, system-generated integer ID precisely specifies the folder location, enabling accurate file operations such as upload, download, retrieval, and organization. 
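For example, `\"folderInternalId\": \"12345\"` (an illustrative value) places the file in whichever folder NetSuite has assigned that internal ID. 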
It is essential for associating files with their respective folders and maintaining the hierarchical structure of the file cabinet, ensuring consistent file management and access control.\n\n**Field behavior**\n- Represents a unique, system-assigned internal ID for a folder in the NetSuite file cabinet.\n- Required to specify or retrieve the exact folder location during file management operations.\n- Must correspond to an existing folder within the NetSuite account to ensure valid file placement.\n- Changing this ID effectively moves the file to a different folder within the file cabinet.\n- Permissions on the folder influence access and operations on the associated files.\n- The field is mandatory when creating or updating a file to define its storage location.\n- Supports hierarchical folder structures by linking files to nested folders via their internal IDs.\n\n**Implementation guidance**\n- Validate that the folderInternalId is a valid integer and corresponds to an existing folder before use.\n- Retrieve valid folder internal IDs through NetSuite’s SuiteScript, REST API, or UI to avoid errors.\n- Implement error handling for cases where the folderInternalId is missing, invalid, or points to a restricted folder.\n- Update this ID carefully, as it changes the file’s folder location and may affect file accessibility.\n- Ensure that the user or integration has appropriate permissions to access or modify the target folder.\n- When moving files between folders, confirm that the destination folder supports the file type and intended use.\n- Cache or store folderInternalId values cautiously to prevent stale references in dynamic folder structures.\n\n**Examples**\n- 12345\n- 67890\n- 101112\n\n**Important notes**\n- This ID is distinct from the folder name; it is a system-generated unique identifier.\n- Modifying the folderInternalId moves the file to a different folder, impacting file organization.\n- Folder permissions and sharing settings can restrict or enable file operations based on this ID.\n- Accurate use of this ID is critical for maintaining the integrity of the file cabinet’s folder hierarchy.\n- The folderInternalId cannot be arbitrarily assigned; it must be obtained from NetSuite’s system.\n- Changes to folder structure or deletion of folders may invalidate existing folderInternalId references.\n\n**Dependency chain**\n- Depends on the existence and structure of folders within the NetSuite"},"internalId":{"type":"string","description":"Unique identifier assigned internally to a file within the NetSuite system, serving as the primary key for that file record. This identifier is essential for programmatic referencing and management of the file, enabling precise operations such as retrieval, update, or deletion. The internalId is immutable once assigned and guaranteed to be unique across all file records, ensuring accurate and consistent identification. It is automatically generated by NetSuite upon file creation and is consistently used across both REST and SOAP API endpoints to perform file-related actions. 
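For example, an update to an existing file would reference it as `\"internalId\": \"12345\"` (an illustrative value returned by NetSuite when the file was created). 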
This identifier is critical for maintaining data integrity, enabling seamless integration, and supporting reliable file management workflows within the NetSuite ecosystem.\n\n**Field behavior**\n- Acts as the definitive and immutable identifier for a file within NetSuite.\n- Must be unique for each file record to prevent conflicts.\n- Required for all API operations targeting a specific file, including retrieval, updates, and deletions.\n- Cannot be manually assigned, duplicated, or reassigned to another file.\n- Remains constant throughout the lifecycle of the file record.\n\n**Implementation guidance**\n- Always retrieve the internalId from NetSuite responses after file creation or queries.\n- Use the internalId exclusively for all subsequent API calls involving the file.\n- Avoid any manual assignment or modification of the internalId to prevent inconsistencies.\n- Validate that the internalId conforms to NetSuite’s expected format before use in API calls.\n- Handle errors gracefully when an invalid or non-existent internalId is provided.\n\n**Examples**\n- \"12345\"\n- \"67890\"\n- \"1001\"\n\n**Important notes**\n- Providing an incorrect or non-existent internalId will result in API errors or failed operations.\n- The internalId is distinct and separate from the file name, file path, or any external identifiers.\n- It plays a critical role in maintaining data integrity and ensuring accurate file referencing within NetSuite.\n- The internalId cannot be reused or recycled for different files.\n\n**Dependency chain**\n- Requires the existence of the corresponding file record within NetSuite.\n- Utilized by API methods such as get, update, and delete for file management operations.\n- May be referenced by other NetSuite entities or records that link to the file.\n- Dependent on NetSuite’s internal database and indexing mechanisms for uniqueness and retrieval.\n\n**Technical details**\n- Represented as a string or numeric value depending on the API context.\n- Automatically assigned by NetSuite during the file creation process.\n- Stored as the primary key in the Net"},"backupFolderInternalId":{"type":"string","description":"backupFolderInternalId is the unique internal identifier assigned by NetSuite to a specific folder within the file cabinet designated exclusively for storing backup files. This identifier is essential for accurately locating, managing, and organizing backup data within the system’s hierarchical file storage structure. It ensures that all backup operations—such as file creation, modification, and retrieval—are precisely targeted to the correct folder, thereby facilitating reliable data preservation, streamlined backup workflows, and efficient recovery processes.  \n**Field behavior:**  \n- Uniquely identifies the backup folder within the NetSuite file cabinet, eliminating ambiguity in folder selection.  \n- Used to specify or retrieve the exact storage location for backup files during automated or manual backup operations.  \n- Typically assigned and referenced during backup configuration, file management tasks, or programmatic interactions with the NetSuite API.  \n- Enables seamless integration with automated backup processes by directing files to the intended folder without manual intervention.  \n**Implementation guidance:**  \n- Verify that the internal ID corresponds to an existing, active, and accessible folder within the NetSuite file cabinet before use.  
\n- Ensure the designated folder has appropriate permissions set to allow creation, modification, and retrieval of backup files.  \n- Avoid using internal IDs of folders that are restricted, archived, deleted, or otherwise unsuitable for active backup storage to prevent errors.  \n- Maintain consistent use of this ID across all API calls, scripts, and backup configurations to uphold data integrity and organizational consistency.  \n**Examples:**  \n- \"12345\" (example of a valid internal folder ID)  \n- \"67890\"  \n- \"34567\"  \n**Important notes:**  \n- This identifier is internal to NetSuite and does not correspond to external folder names, display names, or file system paths.  \n- Changing this ID will redirect backup files to a different folder, which may disrupt existing backup workflows or data organization.  \n- Proper folder permissions are essential to avoid backup failures, access denials, or data loss.  \n- Once set for a given backup configuration, this ID should be treated as immutable unless a deliberate and well-documented change is required.  \n**Dependency chain:**  \n- Requires the target folder to exist within the NetSuite file cabinet and be properly configured for backup storage.  \n- Interacts closely with backup configuration settings, file management APIs, and NetSuite’s permission and security models.  \n- Dependent on NetSuite’s internal folder management system to maintain uniqueness and integrity of folder"}},"description":"Represents a comprehensive file object within the NetSuite system, serving as a fundamental entity for uploading, storing, managing, and referencing a wide variety of file types including documents, images, spreadsheets, audio, video, and other media formats. This object facilitates seamless integration and association of file data with NetSuite records, transactions, custom entities, and workflows, enabling efficient document management, retrieval, and automation across the platform. It supports detailed metadata management such as file name, type, size, folder location, internal identifiers, encoding formats, and timestamps, ensuring precise control, traceability, and auditability of files. The file object can represent both files stored internally in the NetSuite file Cabinet and externally referenced files via URLs or other means, providing flexibility in file handling and access. It enforces platform-specific constraints on file size, type, and permissions to maintain system integrity, security, and compliance with organizational policies. 
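As an illustrative sketch (all values hypothetical), a minimal file object might look like `{ \"name\": \"invoice_2024_06.pdf\", \"fileType\": \"PDF\", \"folderInternalId\": \"12345\" }`. 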
Additionally, it supports versioning and updates to existing files, allowing for effective file lifecycle management within NetSuite.\n\n**Field behavior**\n- Used to specify, upload, retrieve, update, or link files associated with NetSuite records, transactions, custom entities, or workflows.\n- Supports a broad spectrum of file types including PDFs, images (JPEG, PNG, GIF), spreadsheets (Excel, CSV), text documents, audio, video, and other media formats.\n- Can represent files stored internally within the NetSuite file Cabinet or externally referenced files via URLs or other means.\n- Enables attachment of files to records to provide enhanced context, documentation, and audit trails.\n- Manages comprehensive file metadata including internalId, name, fileType, size, folder location, encoding, and timestamps.\n- Supports versioning and updates to existing files where applicable.\n- Enforces platform-specific restrictions on file size, type, and access permissions.\n- Facilitates secure file access and sharing based on user roles and permissions.\n\n**Implementation guidance**\n- Include complete and accurate metadata such as internalId, name, fileType, size, folder, encoding, and timestamps to ensure reliable file referencing and management.\n- Validate file types and sizes against NetSuite’s platform restrictions prior to upload to prevent errors and ensure compliance.\n- Use appropriate encoding methods (e.g., base64) for file content during API transmission to maintain data integrity.\n- Employ multipart/form-data encoding for file uploads when required by the API specifications.\n- Ensure users have the necessary roles and permissions to upload, retrieve, or modify files"}}},"NetsuiteDistributed":{"type":"object","description":"Configuration for NetSuite Distributed (SuiteApp 2.0) import operations.\nThis is the primary sub-schema for NetSuiteDistributedImport adaptorType.\nThe API field name is \"netsuite_da\".\n\n**Key fields**\n- operation: REQUIRED — the NetSuite operation (add, update, addupdate, etc.)\n- recordType: REQUIRED — the NetSuite record type (e.g., \"customer\", \"salesOrder\")\n- mapping: field mappings from source data to NetSuite fields\n- internalIdLookup: how to find existing records for update/addupdate/delete\n- restletVersion: defaults to \"suiteapp2.0\", rarely needs to be changed\n","properties":{"operation":{"type":"string","enum":["add","update","addupdate","attach","detach","delete"],"description":"The NetSuite operation to perform. REQUIRED. This is a plain string value, NOT an object.\n\n**Valid values**\n- \"add\" — Create new records only.\n- \"update\" — Update existing records. Requires internalIdLookup.\n- \"addupdate\" — Upsert: update if found, create if not. Requires internalIdLookup. Most common.\n- \"attach\" — Attach a record to another record.\n- \"detach\" — Detach a record from another record.\n- \"delete\" — Delete records. Requires internalIdLookup.\n\n**Guidance**\n- Default to \"addupdate\" when user says \"sync\", \"upsert\", or \"create or update\".\n- Use \"add\" when user says \"create\" or \"insert\" without mentioning updates.\n- For \"update\", \"addupdate\", and \"delete\", you MUST also set internalIdLookup.\n\n**Important**\nThis field is a string, NOT an object. Set it directly: \"operation\": \"addupdate\"\nDo NOT send {\"type\": \"addupdate\"} — that causes a Cast error.\n"},"recordType":{"type":"string","description":"The NetSuite record type to import into. 
REQUIRED.\n\nMust match a valid NetSuite record type identifier.\n\n**Examples**\n- \"customer\"\n- \"salesOrder\"\n- \"invoice\"\n- \"purchaseOrder\"\n- \"vendorBill\"\n- \"itemFulfillment\"\n- \"employee\"\n- \"inventoryItem\"\n- \"customrecord_myrecord\" (custom record types)\n"},"recordIdentifier":{"type":"string","description":"Custom record identifier, used to identify the specific record type when\nimporting into custom record types.\n"},"recordTypeId":{"type":"string","description":"The internal record type ID. Used for custom record types in NetSuite where\nthe numeric ID is needed in addition to the recordType string.\n"},"restletVersion":{"type":"string","enum":["suitebundle","suiteapp1.0","suiteapp2.0"],"description":"The version of the NetSuite RESTlet to use. This is a plain STRING, NOT an object.\n\nDefaults to \"suiteapp2.0\" when useSS2Restlets is true, \"suitebundle\" otherwise.\nAlmost always \"suiteapp2.0\" for modern integrations — rarely needs to be explicitly set.\n\nIMPORTANT: Set as a plain string value, e.g. \"suiteapp2.0\"\nDo NOT send {\"type\": \"suiteapp2.0\"} — that causes a Cast error.\n"},"useSS2Restlets":{"type":"boolean","description":"Whether to use SuiteScript 2.0 RESTlets. Defaults to true for modern integrations.\nWhen true, restletVersion defaults to \"suiteapp2.0\".\n"},"missingOrCorruptedDAConfig":{"type":"boolean","description":"Flag indicating whether the Distributed Adaptor configuration is missing or corrupted.\nSet by the system — do not set manually.\n"},"batchSize":{"type":"number","description":"Number of records to process per batch. Controls how many records are sent\nto NetSuite in a single API call. Typical values: 50-200.\n"},"internalIdLookup":{"type":"object","description":"Configuration for looking up existing NetSuite records by internal ID.\nREQUIRED when operation is \"update\", \"addupdate\", or \"delete\".\n\nDefines how to find existing records in NetSuite to match against incoming data.\n","properties":{"extract":{"type":"string","description":"The path in the source record to extract the lookup value from.\nExample: \"internalId\" or \"externalId\"\n"},"searchField":{"type":"string","description":"The NetSuite field to search against.\nExample: \"externalId\", \"email\", \"name\", \"tranId\"\n"},"operator":{"type":"string","description":"The comparison operator for the lookup.\nExample: \"is\", \"contains\", \"startswith\"\n"},"expression":{"type":"string","description":"A NetSuite search expression for complex lookup conditions.\nUsed for multi-field or conditional lookups.\n"}}},"hooks":{"type":"object","description":"Script hooks for custom processing at different stages of the import.\nEach hook references a SuiteScript file and function.\n","properties":{"preMap":{"type":"object","description":"Runs before field mapping is applied.","properties":{"fileInternalId":{"type":"string","description":"NetSuite internal ID of the SuiteScript file."},"function":{"type":"string","description":"Name of the function to execute."},"configuration":{"type":"object","description":"Configuration object passed to the hook function."}}},"postMap":{"type":"object","description":"Runs after field mapping, before submission to NetSuite.","properties":{"fileInternalId":{"type":"string","description":"NetSuite internal ID of the SuiteScript file."},"function":{"type":"string","description":"Name of the function to execute."},"configuration":{"type":"object","description":"Configuration object passed to the hook 
function."}}},"postSubmit":{"type":"object","description":"Runs after the record is submitted to NetSuite.","properties":{"fileInternalId":{"type":"string","description":"NetSuite internal ID of the SuiteScript file."},"function":{"type":"string","description":"Name of the function to execute."},"configuration":{"type":"object","description":"Configuration object passed to the hook function."}}}}},"mapping":{"type":"object","description":"Field mappings that define how source data fields map to NetSuite record fields.\nContains both body-level fields and sublist (line-item) fields.\n","properties":{"fields":{"type":"array","items":{"type":"object","properties":{"extract":{"type":"string","description":"Path in the source record to extract the value from.\nUse dot notation for nested fields (e.g., \"address.city\").\n"},"generate":{"type":"string","description":"The NetSuite field ID to write the value to (e.g., \"companyname\", \"email\", \"subsidiary\").\n"},"hardCodedValue":{"type":"string","description":"A static value to always use instead of extracting from source data.\nMutually exclusive with extract.\n"},"lookupName":{"type":"string","description":"Reference to a lookup defined in netsuite_da.lookups by name."},"dataType":{"type":"string","description":"Data type hint for the field value (e.g., \"string\", \"number\", \"date\", \"boolean\").\n"},"internalId":{"type":"boolean","description":"Whether the value is a NetSuite internal ID reference."},"immutable":{"type":"boolean","description":"When true, this field is only set on record creation, not on updates."},"discardIfEmpty":{"type":"boolean","description":"When true, skip this field mapping if the extracted value is empty."},"extractDateFormat":{"type":"string","description":"Date format of the extracted value (e.g., \"MM/DD/YYYY\", \"ISO8601\").\nUsed to parse date strings from source data.\n"},"extractDateTimezone":{"type":"string","description":"Timezone of the extracted date value (e.g., \"America/New_York\")."},"subRecordMapping":{"type":"object","description":"Nested mapping for NetSuite subrecord fields (e.g., address, inventory detail)."},"conditional":{"type":"object","description":"Conditional logic for when to apply this field mapping.\n","properties":{"lookupName":{"type":"string","description":"Lookup to evaluate for the condition."},"when":{"type":"string","enum":["record_created","record_updated","extract_not_empty","lookup_not_empty","lookup_empty","ignore_if_set"],"description":"When to apply this mapping:\n- record_created: only on new records\n- record_updated: only on existing records\n- extract_not_empty: only when extracted value is non-empty\n- lookup_not_empty: only when lookup returns a value\n- lookup_empty: only when lookup returns no value\n- ignore_if_set: skip if field already has a value\n"}}}}},"description":"Body-level field mappings. 
Each entry maps a source field to a NetSuite body field.\n"},"lists":{"type":"array","items":{"type":"object","properties":{"generate":{"type":"string","description":"The NetSuite sublist ID (e.g., \"item\" for sales order line items,\n\"addressbook\" for address sublists).\n"},"jsonPath":{"type":"string","description":"JSON path in the source data that contains the array of sublist records.\n"},"fields":{"type":"array","items":{"type":"object","properties":{"extract":{"type":"string","description":"Path in the sublist record to extract the value from."},"generate":{"type":"string","description":"NetSuite sublist field ID to write to."},"hardCodedValue":{"type":"string","description":"Static value for this sublist field."},"lookupName":{"type":"string","description":"Reference to a lookup by name."},"dataType":{"type":"string","description":"Data type hint for the field value."},"internalId":{"type":"boolean","description":"Whether the value is a NetSuite internal ID reference."},"isKey":{"type":"boolean","description":"Whether this field is a key field for matching existing sublist lines."},"immutable":{"type":"boolean","description":"Only set on record creation, not updates."},"discardIfEmpty":{"type":"boolean","description":"Skip if the extracted value is empty."},"extractDateFormat":{"type":"string","description":"Date format of the extracted value."},"extractDateTimezone":{"type":"string","description":"Timezone of the extracted date value."},"subRecordMapping":{"type":"object","description":"Nested mapping for subrecord fields within the sublist."},"conditional":{"type":"object","properties":{"lookupName":{"type":"string"},"when":{"type":"string","enum":["record_created","record_updated","extract_not_empty","lookup_not_empty","lookup_empty","ignore_if_set"]}}}}},"description":"Field mappings for each column in the sublist."}}},"description":"Sublist (line-item) mappings. Each entry maps source data to a NetSuite sublist.\n"}}},"lookups":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique name for this lookup, referenced by lookupName in field mappings.\n"},"recordType":{"type":"string","description":"NetSuite record type to search (e.g., \"customer\", \"item\").\n"},"searchField":{"type":"string","description":"NetSuite field to search against (e.g., \"email\", \"externalId\", \"name\").\n"},"resultField":{"type":"string","description":"Field from the lookup result to return (e.g., \"internalId\").\n"},"expression":{"type":"string","description":"NetSuite search expression for complex lookup conditions.\n"},"operator":{"type":"string","description":"Comparison operator (e.g., \"is\", \"contains\", \"startswith\").\n"},"includeInactive":{"type":"boolean","description":"Whether to include inactive records in lookup results."},"useDefaultOnMultipleMatches":{"type":"boolean","description":"When true, use the default value if multiple records match the lookup."},"allowFailures":{"type":"boolean","description":"When true, lookup failures do not stop the import."},"map":{"type":"object","description":"Static value mapping for lookup results (key-value pairs)."},"default":{"type":"string","description":"Default value when lookup returns no results."}}},"description":"Lookup definitions used to resolve reference values from NetSuite.\nReferenced by name from field mappings via the lookupName property.\n"},"rawOverride":{"type":"object","description":"Raw override object for advanced use cases. 
When useRawOverride is true,\nthis object is sent directly to the NetSuite API, bypassing normal mapping.\n"},"useRawOverride":{"type":"boolean","description":"When true, uses rawOverride instead of the normal mapping configuration.\n"},"isMigrated":{"type":"boolean","description":"System flag indicating whether this import was migrated from a legacy format.\n"},"preferences":{"type":"object","properties":{"ignoreReadOnlyFields":{"type":"boolean","description":"When true, silently skip read-only fields instead of raising errors."},"warningAsError":{"type":"boolean","description":"When true, treat NetSuite warnings as errors that stop the import."},"skipCustomMetadataRequests":{"type":"boolean","description":"When true, skip fetching custom field metadata to improve performance."}},"description":"Import behavior preferences that control how NetSuite handles the import operation.\n"},"retryUpdateAsAdd":{"type":"boolean","description":"When true, if an update fails because the record doesn't exist, automatically\nretry as an add operation. Useful for initial syncs where records may not exist yet.\n"},"isFileProvider":{"type":"boolean","description":"Whether this import handles files in the NetSuite File Cabinet.\n"},"file":{"type":"object","description":"File cabinet configuration for file-based imports into NetSuite.\n","properties":{"name":{"type":"string","description":"Filename for the file in NetSuite File Cabinet."},"fileType":{"type":"string","description":"NetSuite file type (e.g., \"PDF\", \"CSV\", \"PLAINTEXT\", \"EXCEL\", \"XML\").\n"},"folder":{"type":"string","description":"Folder path or name in the NetSuite File Cabinet."},"folderInternalId":{"type":"string","description":"Internal ID of the target folder in NetSuite File Cabinet."},"internalId":{"type":"string","description":"Internal ID of an existing file to update."},"backupFolderInternalId":{"type":"string","description":"Internal ID of a backup folder for file versioning."}}},"customFieldMetadata":{"type":"object","description":"Metadata about custom fields on the target record type.\nPopulated by the system from NetSuite metadata — do not set manually.\n"}}},"Rdbms":{"type":"object","description":"Configuration for RDBMS import operations. Used for database imports into SQL Server, MySQL, PostgreSQL, Snowflake, Oracle, MariaDB, and other relational databases.\n\n**Query type determines which fields are required**\n\n| queryType          | Required fields                | Do NOT set        |\n|--------------------|--------------------------------|-------------------|\n| [\"per_record\"]     | query (array of SQL strings)   | bulkInsert        |\n| [\"per_page\"]       | query (array of SQL strings)   | bulkInsert        |\n| [\"bulk_insert\"]    | bulkInsert object              | query             |\n| [\"bulk_load\"]      | bulkLoad object                | query             |\n\n**Critical:** query IS AN ARRAY OF STRINGS\nThe query field must be an array of plain strings, NOT a single string and NOT an array of objects.\nCorrect: [\"MERGE INTO target USING ...\"]\nWrong: \"MERGE INTO target ...\"\nWrong: [{\"query\": \"MERGE INTO target ...\"}]\n","properties":{"lookups":{"type":["string","null"],"description":"A collection of predefined reference data sets or key-value pairs designed to standardize, validate, and streamline input values across the application. 
These lookup entries serve as a centralized repository of commonly used values, enabling consistent data entry, minimizing errors, and facilitating efficient data retrieval and processing. They are essential for populating user interface elements such as dropdown menus, autocomplete fields, and for enforcing validation rules within business logic. Lookup data can be static or dynamically updated, and may support localization to accommodate diverse user bases. Additionally, lookups often include metadata such as descriptions, effective dates, and status indicators to provide context and support lifecycle management. They play a critical role in maintaining data integrity, enhancing user experience, and ensuring interoperability across different system modules and external integrations.\n\n**Field behavior**\n- Contains structured sets of reference data including codes, labels, enumerations, or mappings.\n- Drives UI components by providing selectable options and autocomplete suggestions.\n- Ensures data consistency by standardizing input values across different modules.\n- Supports both static and dynamic updates to reflect changes in business requirements.\n- May include metadata like descriptions, effective dates, or status indicators for enhanced context.\n- Facilitates localization and internationalization to support multiple languages and regions.\n- Enables validation logic by restricting inputs to predefined acceptable values.\n- Supports versioning to track changes and maintain historical data integrity.\n\n**Implementation guidance**\n- Organize lookup data as dictionaries, arrays, or database tables for efficient access and management.\n- Implement caching strategies to optimize performance and reduce redundant data retrieval.\n- Enforce uniqueness and relevance of lookup entries within their specific contexts.\n- Provide robust mechanisms for updating, versioning, and extending lookup data without disrupting system stability.\n- Incorporate localization and internationalization support for user-facing lookup values.\n- Secure sensitive lookup data with appropriate access controls and auditing.\n- Design APIs or interfaces for easy retrieval and management of lookup data.\n- Ensure synchronization of lookup data across distributed systems or microservices.\n\n**Examples**\n- Country codes and names: {\"US\": \"United States\", \"CA\": \"Canada\", \"MX\": \"Mexico\"}\n- Status codes representing entity states: {\"1\": \"Active\", \"2\": \"Inactive\", \"3\": \"Pending\"}\n- Product categories for e-commerce platforms: {\"ELEC\": \"Electronics\", \"FASH\": \"Fashion\", \"HOME\": \"Home & Garden\"}\n- Payment methods: {\"CC\": \"Credit Card\", \"PP\": \"PayPal\", \"BT\":"},"query":{"type":"array","items":{"type":"string"},"description":"Array of SQL query strings to execute. REQUIRED when queryType is [\"per_record\"], [\"per_page\"], or [\"first_page\"].\n\nEach element is a complete SQL statement as a plain string. Typically contains a single query.\n\nUse Handlebars to inject values from incoming records. 
RDBMS queries REQUIRE the `record.` prefix and triple braces `{{{ }}}`.\n\n**Format — array of strings**\nThis field is an ARRAY OF STRINGS, not a single string and not an array of objects.\n- CORRECT: [\"INSERT INTO users (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n- WRONG: \"INSERT INTO users ...\"  (not an array)\n- WRONG: [{\"query\": \"INSERT INTO users ...\"}]  (objects are invalid — causes Cast error)\n- WRONG: [\"INSERT INTO users (name) VALUES ('{{name}}')\"]  (missing record. prefix — will produce empty values)\n\n**Critical:** RDBMS HANDLEBARS SYNTAX\n- MUST use `record.` prefix: `{{{record.fieldName}}}`, NOT `{{{fieldName}}}`\n- MUST use triple braces `{{{ }}}` for value references — in RDBMS context, `{{record.field}}` outputs `'value'` (wrapped in single quotes), while `{{{record.field}}}` outputs `value` (raw). Triple braces give you control over quoting in your SQL.\n- Block helpers (`{{#each}}`, `{{#if}}`, `{{/each}}`) use double braces as normal.\n- Nested fields: `{{{record.properties.email}}}`\n\n**Examples**\n- INSERT: [\"INSERT INTO customers (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n- UPDATE: [\"UPDATE inventory SET qty = {{{record.quantity}}} WHERE sku = '{{{record.sku}}}'\"]\n- UPSERT (MySQL): [\"INSERT INTO users (id, name) VALUES ({{{record.id}}}, '{{{record.name}}}') ON DUPLICATE KEY UPDATE name = '{{{record.name}}}'\"]\n- MERGE (Snowflake): [\"MERGE INTO target USING (SELECT '{{{record.email}}}' AS email) AS src ON target.email = src.email WHEN MATCHED THEN UPDATE SET name = '{{{record.name}}}' WHEN NOT MATCHED THEN INSERT (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n\n**String vs numeric values in handlebars**\n- Strings: wrap in single quotes — '{{{record.name}}}'\n- Numbers: no quotes — {{{record.quantity}}}\n"},"queryType":{"type":["array","null"],"items":{"type":"string","enum":["INSERT","UPDATE","bulk_insert","per_record","per_page","first_page","bulk_load"]},"description":"Specifies the execution strategy for the SQL operation.\n\n**CRITICAL: This field DETERMINES whether `query` or `bulkInsert` is required.**\n\n**Preference order (use the highest-performance option the operation supports)**\n\n1. **`[\"bulk_load\"]`** — fastest. Stages data as a file then loads via COPY/bulk mechanism. Supports upsert via `primaryKeys`, and custom merge/ignore logic via `overrideMergeQuery`. **Currently Snowflake and NSAW.**\n2. **`[\"bulk_insert\"]`** — batch INSERT via multi-row VALUES. No upsert/ignore logic. All RDBMS types.\n3. **`[\"per_page\"]`** — one SQL statement per page of records. All RDBMS types.\n4. **`[\"per_record\"]`** — one SQL statement per record. Full control over individual record SQL. All RDBMS types.\n\nAlways prefer the highest option that satisfies the operation. `bulk_load` handles INSERT, UPSERT, and custom merge logic. `bulk_insert` is best for pure INSERTs when `bulk_load` isn't available. `per_page` handles custom SQL at batch level. `per_record` gives full control over individual record SQL.\n\n**Decision tree (Follow in order)**\n\n**STEP 1: Does the database support bulk_load?**\n- If Snowflake, NSAW, or another database with bulk_load support:\n  → **USE `[\"bulk_load\"]`** with:\n    - `bulkLoad.tableName` (required)\n    - `bulkLoad.primaryKeys` for upsert/merge (auto-generates MERGE)\n    - `bulkLoad.overrideMergeQuery: true` for custom SQL (ignore-existing, conditional updates, multi-table ops). 
Override SQL references `{{import.rdbms.bulkLoad.preMergeTemporaryTable}}` for the staging table.\n    - No `primaryKeys` for pure INSERT (auto-generates INSERT)\n\n**STEP 2: Database does NOT support bulk_load**\n- **Pure INSERT**: → **USE `[\"bulk_insert\"]`** (requires `bulkInsert` object)\n- **UPDATE / UPSERT / custom matching logic**: → **USE `[\"per_page\"]`** (requires `query` field). Only use `[\"per_record\"]` if the logic truly cannot be expressed as a batch operation.\n\n**Critical relationship to other fields**\n\n| queryType        | REQUIRES         | DO NOT SET       |\n|------------------|------------------|------------------|\n| `[\"per_record\"]` | `query` field    | `bulkInsert`     |\n| `[\"bulk_insert\"]`| `bulkInsert` obj | `query`          |\n\n**Do not use**\n- `[\"INSERT\"]` or `[\"UPDATE\"]` as standalone values\n- Multiple types combined\n\n**Examples**\n\n**Insert with ignoreExisting (MUST use per_record)**\nPrompt: \"Import customers, ignore existing based on email\"\n```json\n\"queryType\": [\"per_record\"],\n\"query\": [\"INSERT INTO customers (name, email) VALUES ('{{{record.name}}}', '{{{record.email}}}')\"]\n```\n\n**Pure insert (no duplicate checking)**\nPrompt: \"Insert all records into users table\"\n```json\n\"queryType\": [\"bulk_insert\"],\n\"bulkInsert\": { ... }\n```\n\n**Update Operation**\nPrompt: \"Update inventory counts\"\n```json\n\"queryType\": [\"per_record\"],\n\"query\": [\"UPDATE inventory SET qty = {{{record.qty}}} WHERE sku = '{{{record.sku}}}'\"]\n```\n\n**Run Once Per Page**\nPrompt: \"Execute this stored procedure for each page of results\"\n```json\n\"queryType\": [\"per_page\"]\n```\n**per_page uses a different Handlebars context than per_record.** The context is the entire batch (`batch_of_records`), not a single `record.` object. Loop through with `{{#each batch_of_records}}...{{{record.fieldName}}}...{{/each}}`.\n\n**Run Once at Start**\nPrompt: \"Run this cleanup script before processing\"\n```json\n\"queryType\": [\"first_page\"]\n```"},"bulkInsert":{"type":"object","description":"**CRITICAL: This object is ONLY used when `queryType` is `[\"bulk_insert\"]`.**\n\n✅ **SET THIS** when:\n- `queryType` is `[\"bulk_insert\"]`\n- Pure INSERT operation with NO ignoreExisting/duplicate checking\n\n❌ **DO NOT SET THIS** when:\n- `queryType` is `[\"per_record\"]` (use `query` field instead)\n- Operation involves UPDATE, UPSERT, or ignoreExisting logic\n\nIf the prompt mentions \"ignore existing\", \"skip duplicates\", or \"match on field\",\nuse `queryType: [\"per_record\"]` with the `query` field instead of this object.","properties":{"tableName":{"type":"string","description":"The name of the database table into which the bulk insert operation will be executed. This value must correspond to a valid, existing table within the target relational database management system (RDBMS). It serves as the primary destination for inserting multiple rows of data efficiently in a single operation. The table name can include schema or namespace qualifiers if supported by the database (e.g., \"schemaName.tableName\"), allowing precise targeting within complex database structures. 
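For instance, `\"tableName\": \"public.orders\"` (illustrative) targets the orders table within the public schema. 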
Proper validation and sanitization of this value are essential to ensure the operation's success and to prevent SQL injection or other security vulnerabilities.\n\n**Field behavior**\n- Specifies the exact destination table for the bulk insert operation.\n- Must correspond to an existing table in the database schema.\n- Case sensitivity and naming conventions depend on the underlying RDBMS.\n- Supports schema-qualified names where applicable.\n- Used directly in the SQL `INSERT INTO` statement.\n- Influences how data is mapped and inserted during the bulk operation.\n\n**Implementation guidance**\n- Verify the existence and accessibility of the table in the target database before execution.\n- Ensure compliance with the RDBMS naming rules, including reserved keywords, allowed characters, and maximum length.\n- Support and correctly handle schema-qualified table names, respecting database-specific syntax.\n- Sanitize and validate input rigorously to prevent SQL injection and other security risks.\n- Apply appropriate quoting or escaping mechanisms based on the RDBMS (e.g., backticks for MySQL, double quotes for PostgreSQL).\n- Consider the impact of case sensitivity, especially when dealing with quoted identifiers.\n\n**Examples**\n- \"users\"\n- \"public.orders\"\n- \"inventory.products\"\n- \"sales_data_2024\"\n- \"dbo.CustomerRecords\"\n\n**Important notes**\n- The target table must exist and have the appropriate schema and permissions to accept bulk inserts.\n- Incorrect, misspelled, or non-existent table names will cause the operation to fail.\n- Case sensitivity varies by database; for example, PostgreSQL treats quoted identifiers as case-sensitive.\n- Avoid using unvalidated dynamic input to mitigate security vulnerabilities.\n- Schema qualifiers should be used consistently to avoid ambiguity in multi-schema environments.\n\n**Dependency chain**\n- Relies on the database connection and authentication configuration.\n- Interacts with other bulkInsert parameters such as column mappings and data payload.\n- May be influenced by transaction management, locking mechanisms, and database constraints during the insert.\n- Dependent on the database's metadata for validation and existence checks.\n\n**Techn**"},"batchSize":{"type":"string","description":"The number of records to be inserted into the database in a single batch during a bulk insert operation. This parameter is crucial for optimizing the performance and efficiency of bulk data loading by controlling how many records are grouped together before being sent to the database. Proper tuning of batchSize balances memory consumption, transaction overhead, and throughput, enabling the system to handle large volumes of data efficiently without overwhelming resources or causing timeouts. 
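For example, a hypothetical setting of `\"batchSize\": \"5000\"` would commit records in groups of five thousand rows per call. 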
Adjusting batchSize directly impacts transaction size, network utilization, error handling granularity, and recovery strategies, making it essential to tailor this value based on the specific database capabilities, system resources, and workload characteristics.\n\n**Field behavior**\n- Specifies the exact count of records processed and committed in a single batch during bulk insert operations.\n- Determines the frequency and size of database transactions, influencing overall throughput, latency, and system responsiveness.\n- Larger batch sizes can improve throughput but may increase memory usage, transaction duration, and risk of timeouts or locks.\n- Smaller batch sizes reduce memory footprint and transaction time but may increase the total number of transactions and associated overhead.\n- Defines the scope of error handling, as failures typically affect only the current batch, allowing for partial retries or rollbacks.\n- Controls the granularity of commit points, impacting rollback, recovery, and consistency strategies in case of failures.\n\n**Implementation guidance**\n- Choose a batch size that respects the database’s transaction limits, available system memory, and network conditions.\n- Conduct benchmarking and load testing with different batch sizes to identify the optimal balance for your environment.\n- Monitor system performance continuously and adjust batchSize dynamically if supported, to adapt to varying workloads.\n- Implement comprehensive error handling to manage partial batch failures, including retry logic or compensating transactions.\n- Verify compatibility with database drivers, ORM frameworks, or middleware, which may impose constraints or optimizations on batch sizes.\n- Consider the size, complexity, and serialization overhead of individual records, as larger or more complex records may necessitate smaller batches.\n- Factor in network latency and bandwidth to optimize data transfer efficiency and reduce potential bottlenecks.\n\n**Examples**\n- 1000: Suitable for moderate bulk insert operations, balancing speed and resource consumption effectively.\n- 50000: Ideal for high-throughput environments with ample memory and finely tuned database configurations.\n- 100: Appropriate for systems with limited memory or where minimizing transaction size and duration is critical.\n- 5000: A common default batch size providing a good compromise between performance and resource usage.\n\n**Important**"}}},"bulkLoad":{"type":"object","properties":{"tableName":{"type":"string","description":"The name of the target table within the relational database management system (RDBMS) where the bulk load operation will insert, update, or merge data. This property specifies the exact destination table for the bulk data operation and must correspond to a valid, existing table in the database schema. It can include schema qualifiers if supported by the database (e.g., schema.tableName), and must adhere to the naming conventions and case sensitivity rules of the target RDBMS. Proper specification of this property is critical to ensure data is loaded into the correct location without errors or unintended data modification. 
Accurate definition of the table name helps maintain data integrity and prevents operational failures during bulk load processes.\n\n**Field behavior**\n- Identifies the specific table in the database to receive the bulk-loaded data.\n- Must reference an existing table accessible with the current database credentials.\n- Supports schema-qualified names where applicable (e.g., schema.tableName).\n- Used as the target for insert, update, or merge operations during bulk load.\n- Case sensitivity and naming conventions depend on the underlying database system.\n- Determines the scope and context of the bulk load operation within the database.\n- Influences transactional behavior, locking, and concurrency controls during the operation.\n\n**Implementation guidance**\n- Verify the target table exists and the executing user has sufficient permissions for data modification.\n- Ensure the table name respects the case sensitivity and naming rules of the RDBMS.\n- Avoid reserved keywords or special characters unless properly escaped or quoted according to database requirements.\n- Support fully qualified names including schema or database prefixes if required to avoid ambiguity.\n- Validate compatibility between the table schema and the incoming data to prevent type mismatches or constraint violations.\n- Consider transactional behavior, locking, and concurrency implications on the target table during bulk load.\n- Test the bulk load operation in a development or staging environment to confirm correct table targeting.\n- Implement error handling to catch and report issues related to invalid or inaccessible table names.\n\n**Examples**\n- \"customers\"\n- \"sales_data_2024\"\n- \"public.orders\"\n- \"inventory.products\"\n- \"dbo.EmployeeRecords\"\n\n**Important notes**\n- Incorrect or misspelled table names will cause the bulk load operation to fail.\n- The target table schema must be compatible with the incoming data structure to avoid errors.\n- Bulk loading may overwrite, append, or merge data depending on the load mode; ensure this aligns with business requirements.\n- Some databases require quoting or escaping of table names, especially if they"},"primaryKeys":{"type":["string","null"],"description":"primaryKeys specifies the list of column names that uniquely identify each record in the target relational database table during a bulk load operation. These keys are essential for maintaining data integrity by enforcing uniqueness constraints and enabling precise identification of rows for operations such as inserts, updates, upserts, or conflict resolution. Properly defining primaryKeys ensures that duplicate records are detected and handled appropriately, preventing data inconsistencies and supporting efficient data merging processes. This property supports both single-column and composite primary keys, requiring the exact column names as defined in the target schema, and plays a critical role in guiding the bulk load mechanism to correctly match and manipulate records based on their unique identifiers.  \n**Field behavior:**  \n- Defines one or more columns that collectively serve as the unique identifier for each row in the table.  \n- Enforces uniqueness constraints during bulk load operations to prevent duplicate entries.  \n- Facilitates detection and resolution of conflicts or duplicates during data insertion or updating.  \n- Influences the behavior of upsert, merge, or conflict resolution mechanisms in the bulk load process.  
\n- Ensures that each record can be reliably matched and updated based on the specified keys.  \n- Supports both single-column and composite keys, maintaining the order of columns as per the database schema.  \n**Implementation guidance:**  \n- Include all columns that form the complete primary key, especially for composite keys, maintaining the correct order as defined in the database schema.  \n- Ensure column names exactly match those in the target database, respecting case sensitivity where applicable.  \n- Validate that all specified primary key columns exist in both the source data and the target table schema before initiating the bulk load.  \n- Avoid leaving the primaryKeys list empty when uniqueness enforcement or conflict resolution is required.  \n- Consider the immutability and stability of primary key columns to prevent inconsistencies during repeated load operations.  \n- Confirm that the primary key columns are indexed or constrained appropriately in the target database to optimize performance.  \n**Examples:**  \n- [\"id\"] — single-column primary key.  \n- [\"order_id\", \"product_id\"] — composite primary key consisting of two columns.  \n- [\"user_id\", \"timestamp\"] — composite key combining user identifier and timestamp for uniqueness.  \n- [\"customer_id\", \"account_number\", \"region\"] — multi-column composite primary key.  \n**Important notes:**  \n- Omitting primaryKeys when required can result in data duplication, failed loads, or incorrect conflict handling.  \n- The order of"},"overrideMergeQuery":{"type":"boolean","description":"A custom SQL query string that fully overrides the default merge operation executed during bulk load processes in a relational database management system (RDBMS). This property empowers users to define precise, tailored merge logic that governs how records are inserted, updated, or deleted in the target database table when handling large-scale data operations. 
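A hedged sketch pairing `bulkLoad.tableName` with a composite `primaryKeys` value. The schema declares `primaryKeys` as a string (or null) even though the examples above are written as arrays, so a comma-separated string is shown here as an assumption; the enclosing `rdbms.bulkLoad` nesting and the concrete values are likewise illustrative.

```json
{
  "rdbms": {
    "bulkLoad": {
      "tableName": "sales_data_2024",
      "primaryKeys": "order_id,product_id"
    }
  }
}
```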
By specifying this query, users can implement complex matching conditions, custom conflict resolution strategies, additional filtering criteria, or conditional logic that surpasses the system-generated default behavior, ensuring the bulk load aligns perfectly with specific business rules, data integrity requirements, or performance optimizations.\n\n**Field behavior**\n- When provided, this query completely replaces the standard merge statement used in bulk load operations.\n- Supports detailed customization of merge logic, including custom join conditions, conditional updates, selective inserts, and optional deletes.\n- If omitted, the system automatically generates and executes a default merge query based on the schema, keys, and data mappings.\n- The query must be syntactically compatible with the target RDBMS and support any required parameterization for dynamic data injection.\n- Executed within the transactional context of the bulk load to maintain atomicity, consistency, and rollback capabilities in case of failure.\n- The system expects the query to handle all necessary merge scenarios to avoid partial or inconsistent data states.\n- Overrides apply globally for the bulk load operation, affecting all records processed in that batch.\n\n**Implementation guidance**\n- Validate the custom SQL syntax thoroughly before execution to prevent runtime errors and ensure compatibility with the target RDBMS.\n- Ensure the query comprehensively addresses all required merge operations—insert, update, and optionally delete—to maintain data integrity.\n- Support parameter placeholders or bind variables if the system injects dynamic values during execution, and document their usage clearly.\n- Provide users with clear documentation, templates, or examples outlining the expected query structure, required clauses, and best practices.\n- Implement robust safeguards against SQL injection and other security vulnerabilities when accepting and executing custom queries.\n- Test the custom query extensively in a controlled staging environment to verify correctness, performance, and side effects before deploying to production.\n- Consider transaction isolation levels and locking behavior to avoid deadlocks or contention during bulk load operations.\n- Encourage users to include comprehensive error handling and logging within the query or surrounding execution context to facilitate troubleshooting.\n\n**Examples**\n- A MERGE statement that updates existing records based on a composite primary key and inserts new records when no match is found.\n- A merge query incorporating additional WHERE clauses to exclude certain"}},"description":"Specifies whether bulk loading is enabled for relational database management system (RDBMS) operations, allowing for the efficient insertion of large volumes of data through a single, optimized operation. Enabling bulkLoad significantly improves performance during data import, migration, or batch processing by minimizing the overhead associated with individual row inserts and leveraging database-specific bulk insert mechanisms. This feature is particularly beneficial for initial data loads, large-scale migrations, or periodic batch updates where speed, resource efficiency, and reduced transaction time are critical. Bulk loading may temporarily alter database behavior—such as disabling indexes, constraints, or triggers—to maximize throughput, and often requires elevated permissions and careful management of transactional integrity to ensure data consistency. 
Proper use of bulkLoad can lead to substantial reductions in processing time and system resource consumption during large data operations.\n\n**Field behavior**\n- When set to true, the system uses optimized bulk loading techniques to insert data in large batches, greatly enhancing throughput and efficiency.\n- When false or omitted, data insertion defaults to standard row-by-row operations, which may be slower and consume more resources.\n- Typically enabled during scenarios involving initial data population, large batch imports, or data migration processes.\n- May temporarily disable or defer enforcement of indexes, constraints, and triggers to improve performance during the bulk load operation.\n- Can affect database locking and concurrency, potentially locking tables or partitions for the duration of the bulk load.\n- Bulk loading operations may bypass certain transactional controls, affecting rollback and error recovery behavior.\n\n**Implementation guidance**\n- Verify that the target RDBMS supports bulk loading and understand its specific syntax, capabilities, and limitations.\n- Assess transaction management implications, as bulk loading may alter or bypass triggers, constraints, and rollback mechanisms.\n- Implement robust error handling and post-load validation to ensure data integrity and consistency after bulk operations.\n- Monitor system resources such as CPU, memory, and I/O throughput during bulk load to avoid performance bottlenecks or outages.\n- Plan for potential impacts on database availability, locking behavior, and concurrent access during bulk load execution.\n- Ensure data is properly staged and preprocessed to meet the format and requirements of the bulk loading mechanism.\n- Coordinate bulk loading with maintenance windows or low-traffic periods to minimize disruption.\n\n**Examples**\n- `bulkLoad: true` — Enables bulk loading to accelerate insertion of large datasets efficiently.\n- `bulkLoad: false` — Disables bulk loading, performing inserts using standard row-by-row methods.\n- `bulkLoad` omitted — Defaults"},"updateLookupName":{"type":"string","description":"Specifies the exact name of the lookup table or entity within the relational database management system (RDBMS) that is targeted for update operations. This property serves as a precise identifier to determine which lookup data set should be modified, ensuring accurate and efficient targeting within the database schema. It typically corresponds to a physical table name or a logical entity name defined in the database and must strictly align with the existing schema to enable successful updates without errors or unintended side effects. Proper use of this property is critical for maintaining data integrity during update operations on lookup data, as it directly influences which records are affected. 
Accurate specification helps prevent accidental data corruption and supports clear, maintainable database interactions.\n\n**Field behavior**\n- Defines the specific lookup table or entity to be updated during an operation.\n- Acts as a key identifier for the update process to locate the correct data set.\n- Must be provided when performing update operations on lookup data.\n- Influences the scope and effect of the update by specifying the target entity.\n- Changes to this value directly affect which data is modified in the database.\n- Invalid or missing values will cause update operations to fail or target unintended data.\n\n**Implementation guidance**\n- Ensure the value exactly matches the existing lookup table or entity name in the database schema.\n- Validate against the RDBMS naming conventions, including allowed characters, length restrictions, and case sensitivity.\n- Maintain consistent naming conventions across the application to prevent ambiguity and errors.\n- Account for case sensitivity based on the underlying database system’s configuration and collation settings.\n- Avoid using reserved keywords, special characters, or whitespace that may cause conflicts or syntax errors.\n- Implement validation checks to catch misspellings or invalid names before executing update operations.\n- Incorporate error handling to manage cases where the specified lookup name does not exist or is inaccessible.\n\n**Examples**\n- \"country_codes\"\n- \"user_roles\"\n- \"product_categories\"\n- \"status_lookup\"\n- \"department_list\"\n\n**Important notes**\n- Providing an incorrect or misspelled name will result in failed update operations or runtime errors.\n- This field must not be left empty when an update operation is intended.\n- It is only relevant and required when performing update operations on lookup tables or entities.\n- Changes to this value should be carefully managed to avoid unintended data modifications or corruption.\n- Ensure appropriate permissions exist for updating the specified lookup entity to prevent authorization failures.\n- Consistent use of this property across environments (development, staging, production) is essential"},"updateExtract":{"type":"string","description":"Specifies the comprehensive configuration and parameters for updating an existing data extract within a relational database management system (RDBMS). This property defines how the extract's data is modified, supporting various update strategies such as incremental updates, full refreshes, conditional changes, or merges. It ensures that the extract remains accurate, consistent, and aligned with business rules and data integrity requirements by controlling the scope, method, and transactional behavior of update operations. 
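A small sketch of an update-style configuration that names the lookup entity to modify. `updateLookupName` is the schema's property and the value mirrors the examples above; the enclosing `rdbms` nesting is an assumption, and companion fields such as `updateExtract` are omitted because their exact shape depends on the extract being updated.

```json
{
  "rdbms": {
    "updateLookupName": "country_codes"
  }
}
```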
Additionally, it allows for fine-grained control through filters, timestamps, and criteria to selectively update portions of the extract, while managing concurrency and maintaining data consistency throughout the process.\n\n**Field behavior**\n- Determines the update strategy applied to an existing data extract, including incremental, full refresh, conditional, or merge operations.\n- Controls how new data interacts with existing extract data—whether by overwriting, appending, or merging.\n- Supports selective updates using filters, timestamps, or conditional criteria to target specific subsets of data.\n- Manages transactional integrity to ensure updates are atomic, consistent, isolated, and durable (ACID-compliant).\n- Coordinates update execution to prevent conflicts, data corruption, or partial updates.\n- Enables configuration of error handling, logging, and rollback mechanisms during update processes.\n- Handles concurrency control to avoid race conditions and ensure data consistency in multi-user environments.\n- Allows scheduling and triggering of update operations based on time, events, or external signals.\n\n**Implementation guidance**\n- Ensure update configurations comply with organizational data governance, security, and integrity policies.\n- Validate all input parameters and update conditions to avoid inconsistent or partial data modifications.\n- Implement robust transactional support to allow rollback on failure and maintain data consistency.\n- Incorporate detailed logging and error reporting to facilitate monitoring, auditing, and troubleshooting.\n- Optimize update methods based on data volume, change frequency, and performance requirements, balancing between incremental and full refresh approaches.\n- Tailor update logic to leverage specific capabilities and constraints of the target RDBMS, including locking and concurrency controls.\n- Coordinate update timing and execution with downstream systems and data consumers to minimize disruption.\n- Design update processes to be idempotent where possible to support safe retries and recovery.\n- Consider the impact of update latency on data freshness and downstream analytics.\n\n**Examples**\n- Configuring an incremental update that uses a last-modified timestamp column to append only new or changed records to the extract.\n- Defining a full refresh update that completely replaces the existing extract data with a newly extracted dataset.\n- Setting up"},"ignoreLookupName":{"type":"string","description":"Specifies whether the lookup name should be ignored during relational database management system (RDBMS) operations, such as query generation, data retrieval, or relationship resolution. When enabled, this flag instructs the system to bypass the use of the lookup name, which can alter how relationships, joins, or references are resolved within database queries. This behavior is particularly useful in scenarios where the lookup name is redundant, introduces unnecessary complexity, or causes performance overhead. 
Additionally, it allows for alternative identification methods to be prioritized over the lookup name, enabling more flexible or optimized query strategies.\n\n**Field behavior**\n- When set to true, the system excludes the lookup name from all relevant database operations, including query construction, join conditions, and filtering criteria.\n- When false or omitted, the lookup name is actively utilized to resolve references, enforce relationships, and optimize data retrieval.\n- Determines whether explicit lookup names override or supplement default naming conventions, schema-based identifiers, or inferred relationships.\n- Affects how related data is fetched, potentially influencing join strategies, query plans, and lookup optimizations.\n- Impacts the generation of SQL or other query languages by controlling the inclusion of lookup name references.\n\n**Implementation guidance**\n- Enable this flag to enhance query performance by skipping unnecessary lookup name resolution when it is known to be non-essential or redundant.\n- Thoroughly evaluate the impact on data integrity and correctness to ensure that ignoring the lookup name does not result in incomplete, inaccurate, or inconsistent query results.\n- Validate all downstream processes, components, and integrations that depend on lookup names to prevent breaking dependencies or causing data inconsistencies.\n- Consider the underlying database schema design, naming conventions, and relationship mappings before enabling this flag to avoid unintended side effects.\n- Integrate this flag within query builders, ORM layers, data access modules, or middleware to conditionally include or exclude lookup names during query generation and execution.\n- Implement comprehensive testing and monitoring to detect any adverse effects on application behavior or data retrieval accuracy when this flag is toggled.\n\n**Examples**\n- `ignoreLookupName: true` — The system bypasses the lookup name, generating queries without referencing it, which may simplify query logic and improve execution speed.\n- `ignoreLookupName: false` — The lookup name is included in query logic, ensuring that relationships and references are resolved using the defined lookup identifiers.\n- Omission of the property defaults to `false`, meaning lookup names are considered and used unless explicitly ignored.\n\n**Important notes**\n- Ign"},"ignoreExtract":{"type":"string","description":"Specifies whether the extraction process from the relational database management system (RDBMS) should be entirely skipped. When set to true, the system bypasses the data extraction step, which is particularly useful in scenarios where data extraction is managed externally, has already been completed, or is unnecessary for the current operation. This flag directly controls whether the extraction engine initiates the retrieval of data from the source database, thereby influencing all subsequent stages that depend on the extracted data, such as transformation, validation, and loading. 
Proper use of this flag ensures flexibility in workflows by allowing integration with external data pipelines or pre-extracted datasets without redundant extraction efforts.\n\n**Field behavior**\n- Determines if the data extraction phase from the RDBMS is executed or omitted.\n- When true, no data is pulled from the source database, effectively skipping extraction.\n- When false or omitted, the extraction process runs normally, retrieving data as configured.\n- Influences downstream processes such as data transformation, validation, and loading that depend on extracted data.\n- Must be explicitly set to true to skip extraction; otherwise, extraction proceeds by default.\n- Impacts the overall data pipeline flow by potentially altering the availability of fresh data.\n\n**Implementation guidance**\n- Default value should be false to ensure extraction occurs unless intentionally overridden.\n- Validate input strictly as a boolean to prevent misconfiguration.\n- Ensure that skipping extraction does not cause failures or data inconsistencies in subsequent pipeline stages.\n- Use this flag in workflows where extraction is handled outside the current system or when working with pre-extracted datasets.\n- Incorporate checks or safeguards to confirm that necessary data is available from alternative sources when extraction is skipped.\n- Log or notify when extraction is skipped to maintain transparency in data processing workflows.\n- Coordinate with other system components to handle scenarios where extraction is bypassed, ensuring smooth pipeline execution.\n\n**Examples**\n- `ignoreExtract: true` — Extraction step is completely bypassed.\n- `ignoreExtract: false` — Extraction step is performed as usual.\n- Field omitted — Defaults to false, so extraction occurs normally.\n- Used in a pipeline where data is pre-loaded from a file or external system, setting `ignoreExtract: true` to avoid redundant extraction.\n\n**Important notes**\n- Setting this flag to true assumes that the system has access to the required data through other means; otherwise, downstream processes may fail or produce incomplete results.\n- Incorrect use can lead to missing data, causing errors or inconsistencies in the data pipeline."}}},"S3":{"type":"object","description":"Configuration for S3 exports","properties":{"region":{"type":"string","description":"The AWS region where the S3 bucket is physically located, representing the specific geographical area within AWS's global infrastructure that hosts the bucket. This designation directly affects data access latency, availability, redundancy, and compliance with regional data governance, privacy laws, and residency requirements. Selecting the appropriate region is critical for optimizing performance, minimizing costs related to data transfer and storage, and ensuring adherence to legal and organizational policies. The region must be specified using a valid AWS region identifier that accurately corresponds to the bucket's actual location to avoid connectivity issues, authentication failures, and improper routing of API requests. 
Proper region selection also influences disaster recovery strategies, service availability, and integration with other AWS services, making it a foundational parameter in S3 bucket configuration and management.\n\n**Field behavior**\n- Defines the physical and logical location of the S3 bucket within AWS's global network.\n- Directly impacts data access speed, latency, and throughput.\n- Determines compliance with regional data sovereignty, privacy, and security regulations.\n- Must be a valid AWS region code that matches the bucket’s deployment location.\n- Influences availability zones, fault tolerance, and disaster recovery planning.\n- Affects endpoint URL construction and API request routing.\n- Plays a role in cost optimization by affecting storage pricing and data transfer fees.\n- Governs integration capabilities with other AWS services available in the selected region.\n\n**Implementation guidance**\n- Use official AWS region codes such as \"us-east-1\", \"eu-west-2\", or \"ap-southeast-2\" as defined by AWS.\n- Validate the region value against the current list of supported AWS regions to prevent misconfiguration.\n- Ensure the specified region exactly matches the bucket’s physical location to avoid access and authentication errors.\n- Consider cost implications including storage pricing, data transfer fees, and cross-region replication expenses.\n- When migrating or replicating buckets across regions, update this property accordingly and plan for data transfer and downtime.\n- Verify service availability and feature support in the chosen region before deployment.\n- Regularly review AWS region updates and deprecations to maintain compatibility.\n- Use region-specific endpoints when constructing API requests to ensure proper routing.\n\n**Examples**\n- us-east-1 (Northern Virginia, USA)\n- eu-central-1 (Frankfurt, Germany)\n- ap-southeast-2 (Sydney, Australia)\n- sa-east-1 (São Paulo, Brazil)\n- ca-central-1 (Canada Central)\n\n**Important notes**\n- Incorrect or mismatched region specification"},"bucket":{"type":"string","description":"The name of the Amazon S3 bucket where data or objects are stored or will be stored. This bucket serves as the primary container within the AWS S3 service, uniquely identifying the storage location for your data across all AWS accounts globally. The bucket name must strictly comply with AWS naming conventions, including being DNS-compliant, entirely lowercase, and free of underscores, uppercase letters, or IP address-like formats, ensuring global uniqueness and compatibility. It is essential that the specified bucket already exists and that the user or service has the necessary permissions to access, upload, or modify its contents. 
Selecting the appropriate bucket, particularly with regard to its AWS region, can significantly impact application performance, cost efficiency, and adherence to data residency and regulatory requirements.\n\n**Field behavior**\n- Specifies the target S3 bucket for all data storage and retrieval operations.\n- Must reference an existing bucket with proper access permissions.\n- Acts as a globally unique identifier within the AWS S3 ecosystem.\n- Immutable after initial assignment; changing the bucket requires specifying a new bucket name.\n- Influences data locality, latency, cost, and compliance based on the bucket’s region.\n\n**Implementation guidance**\n- Validate bucket names against AWS S3 naming rules: lowercase letters, numbers, and hyphens only; no underscores, uppercase letters, or IP address formats.\n- Confirm the bucket’s existence and verify access permissions before performing operations.\n- Consider the bucket’s AWS region to optimize latency, cost, and compliance with data governance policies.\n- Implement comprehensive error handling for scenarios such as bucket not found, access denied, or insufficient permissions.\n- Align bucket selection with organizational policies and regulatory requirements concerning data storage and residency.\n\n**Examples**\n- \"my-app-data-bucket\"\n- \"user-uploads-2024\"\n- \"company-backups-eu-west-1\"\n\n**Important notes**\n- Bucket names are globally unique across all AWS accounts and cannot be reused or renamed once created.\n- Access control is managed through AWS IAM roles, policies, and bucket policies, which must be correctly configured.\n- The bucket’s region selection is critical for optimizing performance, minimizing costs, and ensuring legal compliance.\n- Bucket naming restrictions prohibit uppercase letters, underscores, and IP address-like formats to maintain DNS compliance.\n\n**Dependency chain**\n- Depends on the availability and stability of the AWS S3 service.\n- Requires valid AWS credentials with appropriate permissions to access or modify the bucket.\n- Typically used alongside object keys, region configurations,"},"fileKey":{"type":"string","description":"The unique identifier or path string that specifies the exact location of a file stored within an Amazon S3 bucket. This key acts as the primary reference for locating, accessing, managing, and manipulating a specific file within the storage system. It can be a simple filename or a hierarchical path that mimics folder structures to logically organize files within the bucket. The fileKey is case-sensitive and must be unique within the bucket to prevent conflicts or overwriting. It supports UTF-8 encoded characters and can include delimiters such as slashes (\"/\") to simulate directories, enabling structured file organization. Proper encoding and adherence to AWS naming conventions are essential to ensure seamless integration with S3 APIs and web requests. 
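A sketch of the core S3 target settings covered so far. The property names come from this schema and the values are drawn from the example lists above; the enclosing `s3` key is an assumption, and the region shown is assumed to match the bucket's actual location, as the `region` guidance requires.

```json
{
  "s3": {
    "region": "us-east-1",
    "bucket": "my-app-data-bucket",
    "fileKey": "documents/report2024.pdf"
  }
}
```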
Additionally, the fileKey plays a critical role in access control, lifecycle policies, and event notifications within the S3 environment.\n\n**Field behavior**\n- Uniquely identifies a file within the S3 bucket namespace.\n- Serves as the primary reference in all file-related operations such as retrieval, update, deletion, and metadata management.\n- Case-sensitive and must be unique to avoid overwriting or conflicts.\n- Supports folder-like delimiters (e.g., slashes) to simulate directory structures for logical organization.\n- Used as a key parameter in all relevant S3 API operations, including GetObject, PutObject, DeleteObject, and CopyObject.\n- Influences access permissions and lifecycle policies when used with prefixes or specific key patterns.\n\n**Implementation guidance**\n- Use a UTF-8 encoded string that accurately reflects the file’s intended path or name.\n- Avoid unsupported or problematic characters; strictly follow S3 key naming conventions to prevent errors.\n- Incorporate logical folder structures using delimiters for better organization and maintainability.\n- Ensure URL encoding when the key is included in web requests or URLs to handle special characters properly.\n- Validate that the key length does not exceed S3’s maximum allowed size (up to 1024 bytes).\n- Consider the impact of key naming on access control policies, lifecycle rules, and event triggers.\n\n**Examples**\n- \"documents/report2024.pdf\"\n- \"images/profile/user123.png\"\n- \"backups/2023/12/backup.zip\"\n- \"videos/2024/events/conference.mp4\"\n- \"logs/2024/06/15/server.log\"\n\n**Important notes**\n- The fileKey is case-sensitive; for example, \"File.txt\" and \"file.txt\" are distinct keys.\n- Changing the fileKey"},"backupBucket":{"type":"string","description":"The name of the Amazon S3 bucket designated for securely storing backup data. This bucket serves as the primary repository for saving copies of critical information, ensuring data durability, integrity, and availability for recovery in the event of data loss, corruption, or disaster. It must be a valid, existing bucket within the AWS environment, configured with appropriate permissions and security settings to facilitate reliable and efficient backup operations. Proper configuration of this bucket—including enabling versioning to preserve historical backup states, applying encryption to protect data at rest, and setting lifecycle policies to manage storage costs and data retention—is essential to optimize backup management, maintain compliance with organizational and regulatory requirements, and control expenses. The bucket should ideally reside in the same AWS region as the backup source to minimize latency and data transfer costs, and may incorporate cross-region replication for enhanced disaster recovery capabilities. 
Additionally, the bucket should be monitored regularly for access patterns, storage usage, and security compliance to ensure ongoing protection and operational efficiency.\n\n**Field behavior**\n- Specifies the exact S3 bucket where backup data will be written and stored.\n- Must reference a valid, existing bucket within the AWS account and region.\n- Used exclusively during backup processes to save data snapshots or incremental backups.\n- Requires appropriate write permissions to allow backup services to upload data.\n- Typically remains static unless backup storage strategies or requirements change.\n- Supports integration with backup scheduling and monitoring systems.\n- Facilitates data recovery by maintaining backup copies in a secure, durable location.\n\n**Implementation guidance**\n- Verify that the bucket name adheres to AWS S3 naming conventions and is DNS-compliant.\n- Confirm the bucket exists and is accessible by the backup service or user.\n- Set up IAM roles and bucket policies to grant necessary write and read permissions securely.\n- Enable bucket versioning to maintain historical backup versions and support data recovery.\n- Implement lifecycle policies to manage storage costs by archiving or deleting old backups.\n- Apply encryption (e.g., AWS KMS) to protect backup data at rest and meet compliance requirements.\n- Consider enabling cross-region replication for disaster recovery scenarios.\n- Regularly audit bucket permissions and access logs to ensure security compliance.\n- Monitor storage usage and costs to optimize resource allocation and prevent unexpected charges.\n- Align bucket configuration with organizational policies for data retention, security, and compliance.\n\n**Examples**\n- \"my-app-backups\"\n- \"company-data-backup-2024\"\n- \"prod-environment-backup-bucket\"\n- \"s3-backups-region1"},"serverSideEncryptionType":{"type":"string","description":"serverSideEncryptionType specifies the type of server-side encryption applied to objects stored in the S3 bucket, determining how data at rest is securely protected by the storage service. This property defines the encryption algorithm or method used by the server to automatically encrypt data upon storage, ensuring confidentiality, integrity, and compliance with security standards. It accepts predefined values that correspond to supported encryption mechanisms, influencing how data is encrypted, accessed, and managed during storage and retrieval operations. 
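Extending the sketch above with the backup and encryption options: `backupBucket` and `serverSideEncryptionType` are the schema's property names, and "AES256" is one of the two encryption values the descriptions name (the alternative, "aws:kms", would also require an appropriately permissioned KMS key). The enclosing `s3` key and the concrete bucket and object names remain illustrative.

```json
{
  "s3": {
    "region": "us-east-1",
    "bucket": "my-app-data-bucket",
    "fileKey": "backups/2023/12/backup.zip",
    "backupBucket": "my-app-backups",
    "serverSideEncryptionType": "AES256"
  }
}
```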
By configuring this property, users can enforce encryption policies that safeguard sensitive information without requiring client-side encryption management, thereby simplifying security administration and enhancing data protection.\n\n**Field behavior**\n- Defines the encryption algorithm or method used by the server to encrypt data at rest.\n- Controls whether server-side encryption is enabled or disabled for stored objects.\n- Accepts specific predefined values corresponding to supported encryption types such as AES256 or aws:kms.\n- Influences data handling during storage and retrieval to maintain data confidentiality and integrity.\n- Applies encryption automatically without requiring client-side encryption management.\n- Determines compliance with organizational and regulatory encryption requirements.\n- Affects how encryption keys are managed, accessed, and audited.\n- Impacts access control and permissions related to encrypted objects.\n- May affect performance and cost depending on the encryption method chosen.\n\n**Implementation guidance**\n- Validate the input value against supported encryption types, primarily \"AES256\" and \"aws:kms\".\n- Ensure the selected encryption type is compatible with the S3 bucket’s configuration, policies, and permissions.\n- When using \"aws:kms\", specify the appropriate AWS KMS key ID if required, and verify key permissions.\n- Gracefully handle scenarios where encryption is not specified, disabled, or set to an empty value.\n- Provide clear and actionable error messages if an unsupported or invalid encryption type is supplied.\n- Consider the impact on performance and cost when enabling encryption, especially with KMS-managed keys.\n- Document encryption settings clearly for users to understand the security posture of stored data.\n- Implement fallback or default behaviors when encryption settings are omitted.\n- Ensure that encryption settings align with audit and compliance reporting requirements.\n- Coordinate with key management policies to maintain secure key lifecycle and rotation.\n- Test encryption settings thoroughly to confirm data is encrypted and accessible as expected.\n\n**Examples**\n- \"AES256\" — Server-side encryption using Amazon S3-managed encryption keys (SSE-S3).\n- \"aws:kms\" — Server-side encryption using AWS Key Management Service-managed keys (SSE-KMS"}}},"Wrapper":{"type":"object","description":"Configuration for Wrapper exports","properties":{"function":{"type":"string","description":"Specifies the exact name of the function to be invoked within the wrapper context, enabling dynamic and flexible execution of different functionalities based on the provided value. This property acts as a critical key identifier that directs the wrapper to call a specific function, facilitating modular, reusable, and maintainable code design. The function name must correspond to a valid, accessible, and callable function within the wrapper's scope, ensuring that the intended operation is performed accurately and efficiently. 
By allowing the function to be selected at runtime, this mechanism supports dynamic behavior, adaptable workflows, and context-sensitive execution paths, enhancing the overall flexibility and scalability of the system.\n\n**Field behavior**\n- Determines which specific function the wrapper will execute during its operation.\n- Accepts a string value representing the exact name of the target function.\n- Must match a valid and accessible function within the wrapper’s execution context.\n- Influences the wrapper’s behavior and output based on the selected function.\n- Supports dynamic assignment to enable flexible and context-dependent function calls.\n- Enables runtime selection of functionality, allowing for versatile and adaptive processing.\n\n**Implementation guidance**\n- Validate that the provided function name exists and is callable within the wrapper environment before invocation.\n- Ensure the function name is supplied as a string and adheres to the naming conventions used in the wrapper’s codebase.\n- Implement robust error handling to gracefully manage cases where the specified function does not exist, is inaccessible, or fails during execution.\n- Sanitize the input to prevent injection attacks, code injection, or other security vulnerabilities.\n- Allow dynamic updates to this property to support runtime changes in function execution without requiring system restarts.\n- Log function invocation attempts and outcomes for auditing and debugging purposes.\n- Consider implementing a whitelist or registry of allowed function names to enhance security and control.\n\n**Examples**\n- \"initialize\"\n- \"processData\"\n- \"renderOutput\"\n- \"cleanupResources\"\n- \"fetchUserDetails\"\n- \"validateInput\"\n- \"exportReport\"\n- \"authenticateUser\"\n\n**Important notes**\n- The function name is case-sensitive and must precisely match the function identifier in the underlying implementation.\n- Misspelled or incorrect function names will cause invocation errors or runtime failures.\n- This property specifies only the function name; any parameters or arguments for the function should be provided separately.\n- The wrapper context must be properly initialized and configured to recognize and execute the specified function.\n- Changes to this property can significantly alter the behavior of the wrapper, so modifications should be managed carefully"},"configuration":{"type":"object","description":"The `configuration` property defines a comprehensive and centralized set of parameters, settings, and operational directives that govern the behavior, functionality, and characteristics of the wrapper component. It acts as the primary container for all customizable options that influence how the wrapper initializes, runs, and adapts to various deployment environments or user requirements. This includes environment-specific configurations, feature toggles, resource limits, integration endpoints, security credentials, logging preferences, and other critical controls. 
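A minimal sketch of a wrapper import using the two properties described above. `function` and `configuration` are the schema's names; the function name and configuration keys are lifted from the example lists in the descriptions and carry no special meaning here, and the enclosing `wrapper` key is an assumption.

```json
{
  "wrapper": {
    "function": "processData",
    "configuration": {
      "timeout": 5000,
      "enableLogging": true,
      "maxRetries": 3
    }
  }
}
```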
By encapsulating these diverse settings, the configuration property enables fine-grained control over the wrapper’s runtime behavior, supports dynamic adaptability, and facilitates consistent management across different scenarios.\n\n**Field behavior**\n- Encapsulates all relevant configurable options affecting the wrapper’s operation and lifecycle.\n- Supports complex nested structures or key-value pairs to represent hierarchical and interrelated settings.\n- Can be mandatory or optional depending on the wrapper’s design and operational context.\n- Changes to configuration typically influence initialization, runtime behavior, and potentially require component restart or reinitialization.\n- May support dynamic updates if the wrapper is designed for live reconfiguration without downtime.\n- Acts as a single source of truth for operational parameters, ensuring consistency and predictability.\n\n**Implementation guidance**\n- Organize configuration parameters logically, grouping related settings to improve readability and maintainability.\n- Implement robust validation mechanisms to enforce correct data types, value ranges, and adherence to constraints.\n- Provide sensible default values for optional parameters to enhance usability and prevent misconfiguration.\n- Design the configuration schema to be extensible and backward-compatible, allowing seamless future enhancements.\n- Secure sensitive information within the configuration using encryption, secure storage, or exclusion from logs and error messages.\n- Thoroughly document each configuration option, including its purpose, valid values, default behavior, and impact on the wrapper.\n- Consider environment-specific overrides or profiles to facilitate deployment flexibility.\n- Ensure configuration changes are atomic and consistent to avoid partial or invalid states.\n\n**Examples**\n- `{ \"timeout\": 5000, \"enableLogging\": true, \"maxRetries\": 3 }`\n- `{ \"environment\": \"production\", \"apiEndpoint\": \"https://api.example.com\", \"features\": { \"beta\": false } }`\n- Nested objects specifying database connection details, authentication credentials, UI themes, third-party service integrations, or resource limits.\n- Feature flags enabling or disabling experimental capabilities dynamically.\n- Security parameters such as OAuth tokens, encryption keys, or access control lists.\n\n**Important notes**\n- Comprehensive and clear documentation of all"},"lookups":{"type":"object","properties":{"type":{"type":"string","description":"type: >\n  Specifies the category or classification of the lookup within the wrapper context.\n  This property defines the nature or kind of lookup being performed or referenced.\n  **Field behavior**\n  - Determines the specific type of lookup operation or data classification.\n  - Influences how the lookup data is processed or interpreted.\n  - May restrict or enable certain values or options based on the type selected.\n  **Implementation guidance**\n  - Use clear and consistent naming conventions for different types.\n  - Validate the value against a predefined set of allowed types to ensure data integrity.\n  - Ensure that the type aligns with the corresponding lookup logic or data source.\n  **Examples**\n  - \"userRole\"\n  - \"productCategory\"\n  - \"statusCode\"\n  - \"regionCode\"\n  **Important notes**\n  - The type value is critical for correctly resolving and handling lookup data.\n  - Changing the type may affect downstream processing or data retrieval.\n  - Ensure compatibility with other related 
fields or components that depend on this type.\n  **Dependency chain**\n  - Depends on the wrapper context to provide scope.\n  - Influences the selection and retrieval of lookup values.\n  - May be linked to validation schemas or business logic modules.\n  **Technical details**\n  - Typically represented as a string.\n  - Should conform to a controlled vocabulary or enumeration where applicable.\n  - Case sensitivity may apply depending on implementation."},"_lookupCacheId":{"type":"string","format":"objectId","description":"_lookupCacheId: Identifier used to reference a specific cache entry for lookup operations within the wrapper's lookup mechanism.\n**Field behavior**\n- Serves as a unique identifier for cached lookup data.\n- Used to retrieve or update cached lookup results efficiently.\n- Helps in minimizing redundant lookup operations by reusing cached data.\n- Typically remains constant for the lifespan of the cached data entry.\n**Implementation guidance**\n- Should be generated to ensure uniqueness within the scope of the wrapper's lookups.\n- Must be consistent and stable to correctly reference the intended cache entry.\n- Should be invalidated or updated when the underlying lookup data changes.\n- Can be a string or numeric identifier depending on the caching system design.\n**Examples**\n- \"userRolesCache_v1\"\n- \"productLookupCache_2024_06\"\n- \"regionCodesCache_42\"\n**Important notes**\n- Proper management of _lookupCacheId is critical to avoid stale or incorrect lookup data.\n- Changing the _lookupCacheId without updating the cache content can lead to lookup failures.\n- Should be used in conjunction with cache expiration or invalidation strategies.\n**Dependency chain**\n- Depends on the wrapper.lookups system to manage cached lookup data.\n- May interact with cache storage or memory management components.\n- Influences lookup operation performance and data consistency.\n**Technical details**\n- Typically implemented as a string identifier.\n- May be used as a key in key-value cache stores.\n- Should be designed to avoid collisions with other cache identifiers.\n- May include versioning or timestamp information to manage cache lifecycle."},"extract":{"type":"string","description":"Extract: Specifies the extraction criteria or pattern used to retrieve specific data from a given input within the lookup process.\n**Field behavior**\n- Defines the rules or patterns to identify and extract relevant information from input data.\n- Can be a string, regular expression, or structured query depending on the implementation.\n- Used during the lookup operation to isolate the desired subset of data.\n- May support multiple extraction patterns or conditional extraction logic.\n**Implementation guidance**\n- Ensure the extraction pattern is precise to avoid incorrect or partial data retrieval.\n- Validate the extraction criteria against sample input data to confirm accuracy.\n- Support common pattern formats such as regex for flexible and powerful extraction.\n- Provide clear error handling if the extraction pattern fails or returns no results.\n- Allow for optional extraction parameters to customize behavior (e.g., case sensitivity).\n**Examples**\n- A regex pattern like `\"\\d{4}-\\d{2}-\\d{2}\"` to extract dates in YYYY-MM-DD format.\n- A JSONPath expression such as `$.store.book[*].author` to extract authors from a JSON object.\n- A simple substring like `\"ERROR:\"` to extract error messages from logs.\n- XPath query to extract nodes from XML data.\n**Important notes**\n- The 
extraction logic directly impacts the accuracy and relevance of the lookup results.\n- Complex extraction patterns may affect performance; optimize where possible.\n- Extraction should handle edge cases such as missing fields or unexpected data formats gracefully.\n- Document supported extraction pattern types and syntax clearly for users.\n**Dependency chain**\n- Depends on the input data format and structure to define appropriate extraction criteria.\n- Works in conjunction with the lookup mechanism that applies the extraction.\n- May interact with validation or transformation steps post-extraction.\n**Technical details**\n- Typically implemented using pattern matching libraries or query language parsers.\n- May require escaping or encoding special characters within extraction patterns.\n- Should support configurable options like multiline matching or case sensitivity.\n- Extraction results are usually returned as strings, arrays, or structured objects depending on context."},"map":{"type":"object","description":"map: >\n  A mapping object that defines key-value pairs used for lookup operations within the wrapper context.\n  **Field behavior**\n  - Serves as a dictionary or associative array for translating or mapping input keys to corresponding output values.\n  - Used to customize or override default lookup values dynamically.\n  - Supports string keys and values, but may also support other data types depending on implementation.\n  **Implementation guidance**\n  - Ensure keys are unique within the map to avoid conflicts.\n  - Validate that values conform to expected formats or types required by the lookup logic.\n  - Consider immutability or controlled updates to prevent unintended side effects during runtime.\n  - Provide clear error handling for missing or invalid keys during lookup operations.\n  **Examples**\n  - {\"US\": \"United States\", \"CA\": \"Canada\", \"MX\": \"Mexico\"}\n  - {\"error404\": \"Not Found\", \"error500\": \"Internal Server Error\"}\n  - {\"env\": \"production\", \"version\": \"1.2.3\"}\n  **Important notes**\n  - The map is context-specific and should be populated according to the needs of the wrapper's lookup functionality.\n  - Large maps may impact performance; optimize size and access patterns accordingly.\n  - Changes to the map may require reinitialization or refresh of dependent components.\n  **Dependency chain**\n  - Depends on the wrapper.lookups context for proper integration.\n  - May be referenced by other properties or methods performing lookup operations.\n  **Technical details**\n  - Typically implemented as a JSON object or dictionary data structure.\n  - Keys and values are usually strings but can be extended to other serializable types.\n  - Should support efficient retrieval, ideally O(1) time complexity for lookups."},"default":{"type":["object","null"],"description":"The default property specifies the fallback or initial value to be used within the wrapper's lookups configuration when no other specific value is provided or applicable. 
This ensures that the system has a predefined baseline value to operate with, preventing errors or undefined behavior.\n\n**Field behavior**\n- Acts as the fallback value in the lookup process.\n- Used when no other matching lookup value is found.\n- Provides a baseline or initial value for the wrapper configuration.\n- Ensures consistent behavior by avoiding null or undefined states.\n\n**Implementation guidance**\n- Define a sensible and valid default value that aligns with the expected data type and usage context.\n- Ensure the default value is compatible with other dependent fields or processes.\n- Update the default value cautiously, as it impacts the fallback behavior system-wide.\n- Validate the default value during configuration loading to prevent runtime errors.\n\n**Examples**\n- A string value such as \"N/A\" or \"unknown\" for textual lookups.\n- A numeric value like 0 or -1 for numerical lookups.\n- A boolean value such as false when a true/false default is needed.\n- An object or dictionary representing a default configuration set.\n\n**Important notes**\n- The default value should be meaningful and appropriate to avoid misleading results.\n- Overriding the default value may affect downstream logic relying on fallback behavior.\n- Absence of a default value might lead to errors or unexpected behavior in the lookup process.\n- Consider localization or context-specific defaults if applicable.\n\n**Dependency chain**\n- Used by the wrapper.lookups mechanism during value resolution.\n- May influence or be influenced by other lookup-related properties or configurations.\n- Interacts with error handling or fallback logic in the system.\n\n**Technical details**\n- Data type should match the expected type of lookup values (string, number, boolean, object, etc.).\n- Stored and accessed as part of the wrapper's lookup configuration object.\n- Should be immutable during runtime unless explicitly reconfigured.\n- May be serialized/deserialized as part of configuration files or API payloads."},"allowFailures":{"type":"boolean","description":"allowFailures: Indicates whether the system should permit failures during the lookup process without aborting the entire operation.\n**Field behavior**\n- When set to true, individual lookup failures are tolerated, allowing the process to continue.\n- When set to false, any failure in the lookup process causes the entire operation to fail.\n- Helps in scenarios where partial results are acceptable or expected.\n**Implementation guidance**\n- Use this flag to control error handling behavior in lookup operations.\n- Ensure that downstream processes can handle partial or incomplete data if allowFailures is true.\n- Log or track failures when allowFailures is enabled to aid in debugging and monitoring.\n**Examples**\n- allowFailures: true — The system continues processing even if some lookups fail.\n- allowFailures: false — The system stops and reports an error immediately upon a lookup failure.\n**Important notes**\n- Enabling allowFailures may result in incomplete or partial data sets.\n- Use with caution in critical systems where data integrity is paramount.\n- Consider combining with retry mechanisms or fallback strategies.\n**Dependency chain**\n- Depends on the lookup operation within wrapper.lookups.\n- May affect error handling and result aggregation components.\n**Technical details**\n- Boolean value: true or false.\n- Default behavior should be defined explicitly to avoid ambiguity.\n- Should be checked before executing lookup calls to determine 
error handling flow."},"_id":{"type":"object","description":"Unique identifier for the lookup entry within the wrapper context.\n**Field behavior**\n- Serves as the primary key to uniquely identify each lookup entry.\n- Immutable once assigned to ensure consistent referencing.\n- Used to retrieve, update, or delete specific lookup entries.\n**Implementation guidance**\n- Should be generated using a globally unique identifier (e.g., UUID or ObjectId).\n- Must be indexed in the database for efficient querying.\n- Should be validated to ensure uniqueness within the wrapper.lookups collection.\n**Examples**\n- \"507f1f77bcf86cd799439011\"\n- \"a1b2c3d4e5f6789012345678\"\n**Important notes**\n- This field is mandatory for every lookup entry.\n- Changing the _id after creation can lead to data inconsistency.\n- Should not contain sensitive or personally identifiable information.\n**Dependency chain**\n- Used by other fields or services that reference lookup entries.\n- May be linked to foreign keys or references in related collections or tables.\n**Technical details**\n- Typically stored as a string or ObjectId type depending on the database.\n- Must conform to the format and length constraints of the chosen identifier scheme.\n- Should be generated server-side to prevent collisions."},"cLocked":{"type":"object","description":"cLocked indicates whether the item is currently locked, preventing modifications or deletions.\n**Field behavior**\n- Represents the lock status of an item within the system.\n- When set to true, the item is considered locked and cannot be edited or deleted.\n- When set to false, the item is unlocked and available for modifications.\n- Typically used to enforce data integrity and prevent concurrent conflicting changes.\n**Implementation guidance**\n- Should be a boolean value: true or false.\n- Ensure that any operation attempting to modify or delete the item checks the cLocked status first.\n- Update the cLocked status appropriately when locking or unlocking the item.\n- Consider integrating with user permissions to control who can change the lock status.\n**Examples**\n- cLocked: true (The item is locked and cannot be changed.)\n- cLocked: false (The item is unlocked and editable.)\n**Important notes**\n- Locking an item does not necessarily mean it is read-only; it depends on the system's enforcement.\n- The lock status should be clearly communicated to users to avoid confusion.\n- Changes to cLocked should be logged for audit purposes.\n**Dependency chain**\n- May depend on user roles or permissions to set or unset the lock.\n- Other fields or operations may check cLocked before proceeding.\n**Technical details**\n- Data type: Boolean.\n- Default value is typically false unless specified otherwise.\n- Stored as part of the lookup or wrapper object in the data model.\n- Should be efficiently queryable to enforce lock checks during operations."}},"description":"A collection of key-value pairs that serve as a centralized reference dictionary to map or translate specific identifiers, codes, or keys into meaningful, human-readable, or contextually relevant values within the wrapper object. This property facilitates data normalization, standardization, and clarity across various components of the application by providing descriptive labels, metadata, or explanatory information associated with each key. 
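A hedged sketch of a single `lookups` entry combining the sub-fields documented above. `extract`, `map`, `default`, and `allowFailures` come from this schema; the map contents mirror the country-code example in the descriptions, `default` is left null as the nullable type allows, and the `extract` value is an assumed field reference (the description also allows regex, JSONPath, or XPath style patterns).

```json
{
  "extract": "countryCode",
  "map": {
    "US": "United States",
    "CA": "Canada",
    "MX": "Mexico"
  },
  "default": null,
  "allowFailures": false
}
```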
It enhances data interpretation, validation, and presentation by enabling consistent and seamless conversion of coded data into understandable formats, supporting both simple and complex hierarchical mappings as needed.\n\n**Field behavior**\n- Functions as a lookup dictionary to resolve codes or keys into descriptive, user-friendly values.\n- Supports consistent data representation and interpretation throughout the application.\n- Commonly accessed during data processing, validation, or UI rendering to enhance readability.\n- May support hierarchical or nested mappings for complex relationships.\n- Typically remains static or changes infrequently but should reflect the latest valid mappings.\n\n**Implementation guidance**\n- Structure as an object, map, or dictionary with unique, well-defined keys and corresponding descriptive values.\n- Ensure keys are consistent, unambiguous, and conform to expected formats or standards.\n- Values should be concise, clear, and informative to facilitate easy understanding by end-users or systems.\n- Support nested or multi-level lookups if the domain requires hierarchical mapping.\n- Validate lookup data integrity regularly to prevent missing, outdated, or incorrect mappings.\n- Consider immutability or concurrency controls if the lookup data is accessed or modified concurrently.\n- Optimize for efficient retrieval, aiming for constant-time (O(1)) lookup performance.\n\n**Examples**\n- Mapping ISO country codes to full country names, e.g., {\"US\": \"United States\", \"FR\": \"France\"}.\n- Translating HTTP status codes to descriptive messages, e.g., {\"200\": \"OK\", \"404\": \"Not Found\"}.\n- Converting product or SKU identifiers to product names within an inventory management system.\n- Mapping error codes to user-friendly error descriptions.\n- Associating role IDs with role names in an access control system.\n\n**Important notes**\n- Keep the lookup collection current to accurately reflect any updates in codes or their meanings.\n- Avoid including sensitive, confidential, or personally identifiable information within lookup values.\n- Monitor and manage the size of the lookup collection to prevent performance degradation.\n- Ensure thread-safety or appropriate synchronization mechanisms if the lookups are mutable and accessed concurrently.\n- Changes to lookup data can have widespread effects on data interpretation and user interface behavior."}}},"Salesforce":{"type":"object","description":"Configuration for Salesforce imports containing operation type, API selection, and object-specific settings.\n\n**IMPORTANT:** This schema defines ONLY the Salesforce-specific configuration properties. Properties like `ignoreExisting`, `ignoreMissing`, `name`, `description` are NOT part of this schema - they belong at a different level.\n\nWhen asked to generate this configuration, return an object with these properties:\n- `operation` (REQUIRED): The Salesforce operation type (insert, update, upsert, etc.)\n- `api` (REQUIRED): The Salesforce API to use (default: \"soap\")\n- `sObjectType`: The Salesforce object type (e.g., \"Account\", \"Contact\", \"Vendor__c\")\n- `idLookup`: Configuration for looking up existing records\n- `upsert`: Configuration for upsert operations (when operation is \"upsert\")\n- And other operation-specific properties as needed","properties":{"lookups":{"type":["string","null"],"description":"A collection of lookup tables or reference data sets that provide standardized, predefined values or options within the application context. 
These lookup tables enable consistent data entry, validation, and interpretation by defining controlled vocabularies or permissible values for various fields or parameters across the system. They serve as authoritative sources for reference data, ensuring uniformity and reducing errors in data handling and user interactions. Lookup tables typically encompass static or infrequently changing data that supports consistent business logic and user interface behavior across multiple modules or components. By centralizing these reference datasets, the system promotes reusability, simplifies maintenance, and enhances data integrity throughout the application lifecycle.\n\n**Field behavior**\n- Contains multiple named lookup tables, each representing a distinct category or domain of reference data.\n- Used to populate user interface elements such as dropdown menus, autocomplete fields, and selection lists.\n- Supports validation logic by restricting input to predefined permissible values.\n- Typically consists of static or infrequently changing data that underpins consistent data interpretation.\n- May be referenced by multiple components or modules within the application to maintain data consistency.\n- Enables dynamic adaptation of UI and business rules by centralizing reference data management.\n- Facilitates localization by supporting language-specific labels or descriptions where applicable.\n- Often versioned or managed to track changes and ensure backward compatibility.\n\n**Implementation guidance**\n- Structure as a dictionary or map where keys correspond to lookup table names and values are arrays or lists of unique, well-defined entries.\n- Ensure each lookup entry is unique within its table and includes necessary metadata such as codes, descriptions, or display labels.\n- Design for easy updates or extensions to lookup tables without compromising existing data integrity or system stability.\n- Implement caching strategies to optimize performance, especially for client-side consumption.\n- Consider supporting localization and internationalization by including language-specific labels or descriptions where applicable.\n- Incorporate versioning or change management mechanisms to handle updates without disrupting dependent systems.\n- Validate data integrity rigorously during ingestion or updates to prevent duplicates, inconsistencies, or invalid entries.\n- Separate lookup data from business logic to maintain clarity and ease of maintenance.\n- Provide clear documentation for each lookup table to facilitate understanding and correct usage by developers and users.\n- Ensure lookup data is accessible through standardized APIs or interfaces to promote interoperability.\n\n**Examples**\n- A \"countries\" lookup containing standardized country codes (e.g., ISO 3166-1 alpha-2) and their corresponding country names.\n- A \"statusCodes\" lookup defining permissible status values such as \""},"operation":{"type":"string","enum":["insert","update","upsert","upsertpicklistvalues","delete","addupdate"],"description":"Specifies the type of Salesforce operation to perform on records. 
This determines how data is created, modified, or removed in Salesforce.\n\n**Field behavior**\n- Defines the specific Salesforce action to execute (insert, update, upsert, etc.)\n- Dictates required fields and validation rules for the operation\n- Determines API endpoint and request structure\n- Automatically converted to lowercase before processing\n- Influences response format and error handling\n\n**Operation types**\n\n****insert****\n- **Purpose:** Create new records in Salesforce\n- **Behavior:** Creates new records\n- **Use when:** Creating brand new records\n- **With ignoreExisting: true:** Checks for existing records and skips them (requires idLookup.whereClause)\n- **Without ignoreExisting:** Creates all records without checking for duplicates\n- **Example:** \"Insert new leads from marketing campaign\"\n- **Example with ignoreExisting:** \"Create vendors while ignoring existing vendors\" → operation: \"insert\" + ignoreExisting: true\n\n****update****\n- **Purpose:** Modify existing records in Salesforce\n- **Behavior:** Requires record ID; fails if record doesn't exist\n- **Use when:** Updating known existing records\n- **Example:** \"Update customer status based on order completion\"\n\n****upsert****\n- **Purpose:** Create or update records based on external ID\n- **Behavior:** Updates if external ID matches, creates if not found\n- **Use when:** Syncing data from external systems with external ID field\n- **Example:** \"Upsert customers by External_ID__c field\"\n- **REQUIRES:**\n  - `idLookup.extract` field (maps incoming field to External ID)\n  - `upsert.externalIdField` field (specifies Salesforce External ID field)\n  - Both fields MUST be set for upsert to work\n\n****upsertpicklistvalues****\n- **Purpose:** Create or update picklist values dynamically\n- **Behavior:** Manages picklist value creation/updates\n- **Use when:** Maintaining picklist values from external data\n- **Example:** \"Sync product categories to Salesforce picklist\"\n\n****delete****\n- **Purpose:** Remove records from Salesforce\n- **Behavior:** Requires record ID; records moved to recycle bin\n- **Use when:** Removing obsolete or invalid records\n- **Example:** \"Delete expired opportunities\"\n\n****addupdate****\n- **Purpose:** Create new or update existing records (Celigo-specific)\n- **Behavior:** Similar to upsert but uses different matching logic\n- **Use when:** Upserting without requiring external ID field\n- **Example:** \"Sync contacts, create new or update existing by email\"\n\n**Implementation guidance**\n- Choose operation based on whether records are new, existing, or unknown\n- For **insert**: Ensure records are truly new to avoid errors\n- For **insert with ignoreExisting: true**: MUST set `idLookup.whereClause` to check for existing records\n- For **update/delete**: MUST set `idLookup.whereClause` to identify records\n- For **upsert**: MUST also include the `upsert` object with `externalIdField` property\n- For **upsert**: MUST also set `idLookup.extract` to map incoming field\n- For **addupdate**: MUST set `idLookup.whereClause` to define matching criteria\n- Use **upsertpicklistvalues** only for picklist management scenarios\n\n**Common patterns**\n- **\"Create X while ignoring existing X\"** → operation: \"insert\" + api: \"soap\" + idLookup.whereClause\n- **\"Create or update X\"** → operation: \"upsert\" + api: \"soap\" + upsert.externalIdField + idLookup.extract\n- **\"Update X\"** → operation: \"update\" + api: \"soap\" + idLookup.whereClause\n- **\"Sync X\"** → 
operation: \"addupdate\" + api: \"soap\" + idLookup.whereClause\n- **ALWAYS include** api field (default to \"soap\" if not specified in prompt)\n\n**Examples**\n\n**Insert new records**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Lead\"\n}\n```\nPrompt: \"Create new leads in Salesforce\"\n\n**Insert with ignore existing**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Vendor__c\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{VendorName}}'\"\n  }\n}\n```\nPrompt: \"Create vendor records while ignoring existing vendors\"\n\n**NOTE:** For \"ignore existing\" prompts, also remember that `ignoreExisting: true` belongs at a different level (not part of this salesforce configuration).\n\n**Update existing records**\n```json\n{\n  \"operation\": \"update\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Account\",\n  \"idLookup\": {\n    \"whereClause\": \"Id = '{{AccountId}}'\"\n  }\n}\n```\nPrompt: \"Update existing accounts in Salesforce\"\n\n**Upsert by external id**\n```json\n{\n  \"operation\": \"upsert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Contact\",\n  \"idLookup\": {\n    \"extract\": \"ExternalId\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\nPrompt: \"Create or update contacts by External_ID__c\"\n\n**Addupdate (create or update)**\n```json\n{\n  \"operation\": \"addupdate\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Customer__c\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{CustomerName}}'\"\n  }\n}\n```\nPrompt: \"Sync customers, create new or update existing\"\n\n**Important notes**\n- Operation value is case-insensitive (automatically converted to lowercase)\n- Invalid operation values will cause API validation errors\n- Each operation has different requirements for record identification\n- **CRITICAL for upsert**: MUST include the `upsert` object with `externalIdField` property\n- **CRITICAL for upsert**: MUST set `idLookup.extract` to map incoming field to External ID\n- **upsert** requires external ID field to be configured in Salesforce\n- **addupdate** is Celigo-specific and may not work with all Salesforce APIs\n- Operations must align with user's Salesforce permissions\n- Some operations support bulk processing more efficiently than others"},"api":{"type":"string","enum":["soap","rest","metadata","compositerecord"],"description":"**REQUIRED FIELD:** Specifies which Salesforce API to use for the import operation. Different APIs have different capabilities, performance characteristics, and use cases.\n\n**CRITICAL:** This field MUST ALWAYS be set. 
If no specific API is mentioned in the prompt, **DEFAULT to \"soap\"**.\n\n**Field behavior**\n- **REQUIRED** for all Salesforce imports\n- Determines the protocol and endpoint used for Salesforce communication\n- Influences request format, batch sizes, and response handling\n- Automatically converted to lowercase before processing\n- Affects available features and operation types\n- Impacts performance and error handling behavior\n\n**Api types**\n\n****soap** (Default - Most Common)**\n- **Purpose:** SOAP API with additional features like AllOrNone transaction control\n- **Best for:** Transactional imports requiring rollback capabilities\n- **Features:** AllOrNone header, enhanced transaction control, XML-based\n- **Limits:** Similar to REST but with more overhead\n- **Use when:** Need transaction rollback, legacy integrations\n- **Example:** \"Import invoices with all-or-none transaction guarantee\"\n\n****rest****\n- **Purpose:** Standard REST API for single-record and small-batch operations\n- **Best for:** Real-time imports, small datasets (< 200 records/call)\n- **Features:** Full CRUD operations, flexible querying, rich error messages\n- **Limits:** 2,000 records per API call\n- **Use when:** Standard import operations, real-time data sync\n- **Example:** \"Import leads from web form submissions\"\n\n****metadata****\n- **Purpose:** Metadata API for importing configuration and setup data\n- **Best for:** Deploying Salesforce configuration, not data records\n- **Features:** Deploy custom objects, fields, workflows, etc.\n- **Use when:** Deploying Salesforce configuration between orgs\n- **Example:** \"Deploy custom object definitions from non-production to production\"\n- **Note:** NOT for data records - only for metadata\n\n****compositerecord****\n- **Purpose:** Composite Graph API for related record trees\n- **Best for:** Creating multiple related records in one request\n- **Features:** Create parent-child relationships atomically\n- **Use when:** Importing hierarchical data (e.g., Account + Contacts + Opportunities)\n- **Example:** \"Import accounts with their related contacts in one operation\"\n\n**Implementation guidance**\n- **MUST ALWAYS SET THIS FIELD** - Never leave it undefined\n- **Default to \"soap\"** for most import operations\n- If prompt doesn't specify API type, use **\"soap\"**\n- Use **\"rest\"** only when the prompt explicitly calls for the REST API or for real-time, small-batch operations\n- Use **\"metadata\"** only for configuration deployment, never for data\n- Use **\"compositerecord\"** for complex parent-child relationship imports\n- Consider API limits and performance characteristics for your data volume\n- Ensure the Salesforce connection supports the chosen API type\n\n**Default choice**\nWhen in doubt or when no API is mentioned in the prompt → **USE \"soap\"**\n\n**Examples**\n\n**Rest api**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"rest\",\n  \"sObjectType\": \"Lead\"\n}\n```\nPrompt: \"Import leads into Salesforce\"\n\n**Soap api with transaction control**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Invoice__c\",\n  \"soap\": {\n    \"headers\": {\n      \"allOrNone\": true\n    }\n  }\n}\n```\nPrompt: \"Import invoices with all-or-none guarantee\"\n\n**Composite Record api**\n```json\n{\n  \"operation\": \"insert\",\n  \"api\": \"compositerecord\",\n  \"sObjectType\": \"Account\"\n}\n```\nPrompt: \"Import accounts with related contacts and opportunities\"\n\n**Important notes**\n- API value is 
case-insensitive (automatically converted to lowercase)\n- **SOAP API is the default and most commonly used**\n- SOAP API has slightly more overhead but provides transaction guarantees\n- Metadata API is for configuration only, not data records\n- Composite Record API requires careful relationship mapping\n- Each API has different rate limits and performance characteristics\n- Ensure your Salesforce connection and permissions support the chosen API"},"soap":{"type":"object","properties":{"headers":{"type":"object","properties":{"allOrNone":{"type":"boolean","description":"allOrNone specifies whether the Salesforce API operation should be executed atomically, ensuring that either all parts of the operation succeed or none do. When set to true, if any record in a batch operation fails, the entire transaction is rolled back and no changes are committed, preserving data integrity across the dataset. When set to false, the operation allows partial success, meaning some records may be processed and committed even if others fail, which can improve performance but requires careful error handling to manage partial failures. This property is primarily applicable to data manipulation operations such as create, update, delete, and upsert, and is critical for maintaining consistent and reliable data states in multi-record transactions.\n\n**Field behavior**\n- When true, enforces atomicity: all records in the batch must succeed or the entire operation is rolled back with no changes applied.\n- When false, permits partial success: successfully processed records are committed even if some records fail.\n- Applies mainly to batch DML operations including create, update, delete, and upsert.\n- Helps maintain data consistency by preventing partial updates when strict transactional integrity is required.\n- Influences how errors are reported and handled in the API response.\n- Affects transaction commit behavior and rollback mechanisms within the Salesforce platform.\n\n**Implementation guidance**\n- Set to true to guarantee strict data consistency and avoid partial data changes that could lead to inconsistent states.\n- Set to false to allow higher throughput and partial processing when some failures are acceptable or expected.\n- Default value is false if the property is omitted, allowing partial success by default.\n- When false, implement robust error handling to process partial success responses and handle individual record errors appropriately.\n- Verify that the target Salesforce API operation supports the allOrNone header before use.\n- Consider the impact on transaction duration and system resources when enabling atomic operations.\n- Use in scenarios where data integrity is paramount, such as financial or compliance-related data updates.\n\n**Examples**\n- allOrNone = true: Attempting to update 10 records where one record fails validation results in no records being updated, preserving atomicity.\n- allOrNone = false: Attempting to update 10 records where one record fails validation results in 9 records updated successfully and one failure reported.\n- allOrNone = true: Deleting multiple records in a batch will either delete all specified records or none if any deletion fails.\n- allOrNone = false: Creating multiple records where some violate unique constraints results in successful creation of valid records and reported failures for the rest."}},"description":"Headers to be included in the SOAP request sent to Salesforce, primarily used for authentication, session management, and transmitting additional metadata required by the 
Salesforce API. These headers form part of the SOAP envelope's header section and are essential for establishing and maintaining a valid session, specifying client options, and enabling various Salesforce API features. They can include standard headers such as SessionHeader for session identification, CallOptions for client-specific settings, and LoginScopeHeader for scoping login requests, as well as custom headers tailored to specific integration needs. Proper configuration of these headers ensures secure and successful communication with Salesforce services, enabling precise control over API interactions and session lifecycle.\n\n**Field behavior**\n- Defines the set of SOAP headers included in every request to the Salesforce API.\n- Facilitates passing of authentication credentials like session IDs or OAuth tokens.\n- Supports inclusion of metadata that controls API behavior or scopes requests.\n- Allows addition of custom headers required by specialized Salesforce integrations.\n- Headers persist across multiple API calls within the same session when set.\n- Modifies request context by influencing how Salesforce processes and authorizes API calls.\n- Enables fine-grained control over API call execution, such as setting query options or client identifiers.\n\n**Implementation guidance**\n- Construct headers as key-value pairs conforming to Salesforce’s SOAP header XML schema.\n- Include mandatory authentication headers such as SessionHeader with valid session IDs.\n- Validate all header values to prevent injection attacks or malformed requests.\n- Use headers to specify client information, API versioning, or organizational context as needed.\n- Ensure header names and structures strictly follow Salesforce SOAP API documentation.\n- Secure sensitive header data to prevent unauthorized access or leakage.\n- Update headers appropriately when session tokens expire or when switching contexts.\n- Test header configurations thoroughly to confirm compatibility with Salesforce API versions.\n- Consider using namespaces and proper XML serialization to maintain SOAP compliance.\n- Handle errors gracefully when headers are missing or invalid to improve debugging.\n\n**Examples**\n- `{ \"SessionHeader\": { \"sessionId\": \"00Dxx0000001gEREAY\" } }` — authenticates the session.\n- `{ \"CallOptions\": { \"client\": \"MyAppClient\" } }` — specifies client application details.\n- `{ \"LoginScopeHeader\": { \"organizationId\": \"00Dxx0000001gEREAY\" } }` — scopes login to a specific organization.\n- `{ \"CustomHeader\": { \"customField\": \"customValue\" } }` — example of a"},"batchSize":{"type":"number","description":"The number of records to be processed in each batch during Salesforce SOAP API operations. This property controls the size of data chunks sent or retrieved in a single API call, optimizing performance and resource utilization by balancing throughput and system resource consumption. Adjusting the batch size directly impacts the number of API calls made, the processing speed, memory usage, and network load, enabling fine-tuning of data operations to suit specific environment constraints and performance goals. 
Proper configuration of batchSize helps prevent API throttling, reduces the risk of timeouts, and ensures efficient handling of large datasets by segmenting data into manageable portions.\n\n**Field behavior**\n- Determines the count of records included in each batch request to the Salesforce SOAP API.\n- Directly influences the total number of API calls required when handling large datasets.\n- Affects processing throughput, latency, and memory usage during data operations.\n- Can be dynamically adjusted to optimize performance based on system capabilities and network conditions.\n- Impacts error handling and retry mechanisms by controlling batch granularity.\n- Influences network load and the likelihood of encountering API limits or timeouts.\n\n**Implementation guidance**\n- Choose a batch size that adheres to Salesforce API limits and recommended best practices.\n- Balance between larger batch sizes (which reduce API calls but increase memory and processing load) and smaller batch sizes (which increase API calls but reduce resource consumption).\n- Ensure the batchSize value is a positive integer within the allowed range for the specific Salesforce operation.\n- Continuously monitor API response times, error rates, and resource utilization to fine-tune the batch size for optimal performance.\n- Default to Salesforce-recommended batch sizes when uncertain or when starting configuration.\n- Consider network stability and latency when selecting batch size to minimize timeouts and failures.\n- Validate batch size compatibility with other integration components and middleware.\n- Test batch size settings in a staging environment before deploying to production to avoid unexpected failures.\n\n**Examples**\n- `batchSize: 200` — Processes 200 records per batch, a common default for many Salesforce operations.\n- `batchSize: 500` — Processes 500 records per batch, often the maximum allowed for certain Salesforce SOAP API calls.\n- `batchSize: 50` — Processes smaller batches suitable for environments with limited memory or network bandwidth.\n- `batchSize: 100` — A moderate batch size balancing throughput and resource usage in typical scenarios.\n\n**Important notes**\n- Salesforce SOAP API enforces maximum batch size limits (commonly 200"}},"description":"Specifies the comprehensive configuration settings required for integrating with the Salesforce SOAP API. This property includes all essential parameters to establish, authenticate, and manage SOAP-based communication with Salesforce services, enabling a wide range of operations such as creating, updating, deleting, querying, and executing actions on Salesforce objects. It encompasses detailed configurations for endpoint URLs, authentication credentials (such as session IDs or OAuth tokens), SOAP headers, session management details, timeout settings, and error handling mechanisms to ensure reliable, secure, and efficient interactions. 
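\n\nA minimal illustrative value for this object, using only the sub-fields documented above (`headers.allOrNone` and `batchSize`); the values shown are examples, not requirements:\n```json\n{\n  \"headers\": {\n    \"allOrNone\": true\n  },\n  \"batchSize\": 200\n}\n```\n\n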
The configuration must strictly adhere to Salesforce's WSDL definitions and security standards, supporting robust SOAP API usage within the application environment.\n\n**Field behavior**\n- Defines all connection, authentication, and operational parameters necessary for Salesforce SOAP API interactions.\n- Facilitates the construction and transmission of properly structured SOAP requests and the parsing of SOAP responses.\n- Manages the session lifecycle, including token acquisition, refresh, and timeout handling to maintain persistent connectivity.\n- Supports comprehensive error detection and handling of SOAP faults, API exceptions, and network issues to maintain integration stability.\n- Enables customization of SOAP headers, namespaces, and envelope structures as required by specific Salesforce API calls.\n- Allows configuration of retry policies and backoff strategies in response to transient errors or rate limiting.\n- Supports environment-specific configurations, such as production, non-production, or custom Salesforce domains.\n\n**Implementation guidance**\n- Configure endpoint URLs and authentication credentials accurately based on the Salesforce environment (production, non-production, or custom domains).\n- Validate SOAP envelope structure, namespaces, and compliance with the latest Salesforce WSDL specifications to prevent communication errors.\n- Implement robust error handling to capture, log, and respond appropriately to SOAP faults, API limit exceptions, and network failures.\n- Securely store and transmit sensitive information such as session IDs, OAuth tokens, passwords, and certificates, following best security practices.\n- Monitor Salesforce API usage limits and implement throttling or queuing mechanisms to avoid exceeding quotas and service disruptions.\n- Set timeout values thoughtfully to balance between preventing hanging requests and allowing sufficient time for valid operations to complete.\n- Regularly update the SOAP configuration to align with Salesforce API version changes, WSDL updates, and evolving security requirements.\n- Incorporate logging and monitoring to track SOAP request and response cycles for troubleshooting and performance optimization.\n- Ensure compatibility with Salesforce security protocols, including TLS versions and encryption standards.\n\n**Examples**\n- Specifying the Salesforce SOAP API endpoint URL along with a valid session ID or OAuth token for authentication.\n- Defining custom SOAP headers"},"sObjectType":{"type":"string","description":"sObjectType specifies the exact Salesforce object type that the API operation will target, such as standard objects like Account, Contact, Opportunity, or custom objects like CustomObject__c. This property is essential for defining the schema context of the API request, determining which fields are relevant, and directing the operation to the correct Salesforce data entity. It plays a critical role in data validation, processing logic, and endpoint routing within the Salesforce environment, ensuring that operations such as create, update, delete, or query are executed against the intended object type with precision. Accurate specification of sObjectType enables dynamic adjustment of request payloads and response handling based on the object’s schema and enforces compliance with Salesforce’s API naming conventions and permissions model.  \n**Field behavior:**  \n- Identifies the specific Salesforce object (sObject) type targeted by the API operation.  
\n- Determines the applicable schema, field availability, and validation rules based on the selected object.  \n- Must exactly match a valid Salesforce object API name recognized by the connected Salesforce instance, including case sensitivity.  \n- Routes the API request to the appropriate Salesforce object endpoint for the intended operation.  \n- Influences dynamic field mapping, data transformation, and response parsing processes.  \n- Supports both standard and custom Salesforce objects, with custom objects requiring the “__c” suffix.  \n**Implementation guidance:**  \n- Validate sObjectType values against the current Salesforce object metadata in the target environment to prevent invalid requests.  \n- Enforce exact case-sensitive matching with Salesforce API object names (e.g., \"Account\" not \"account\").  \n- Support and recognize both standard objects and custom objects, ensuring custom objects include the “__c” suffix.  \n- Implement robust error handling for invalid, unsupported, or deprecated sObjectType values to provide clear feedback.  \n- Regularly update the list of allowed sObjectType values to reflect changes in the Salesforce schema or environment.  \n- Use sObjectType to dynamically tailor API request payloads, field selections, and response parsing logic.  \n- Consider object-level permissions and authentication scopes when processing requests involving specific sObjectTypes.  \n**Examples:**  \n- \"Account\"  \n- \"Contact\"  \n- \"Opportunity\"  \n- \"Lead\"  \n- \"CustomObject__c\"  \n- \"Case\"  \n- \"Campaign\"  \n**Important notes:**  \n- Custom Salesforce objects must be referenced using their exact API names, including the “__c” suffix.  \n-"},"idLookup":{"type":"object","properties":{"extract":{"type":"string","description":"Please specify which field from your export data should map to External ID. It is very important to pick a field that is both unique, and guaranteed to always have a value when exported (otherwise the Salesforce Upsert operation will fail). If sample data is available then a select list should be displayed here to help you find the right field from your export data. 
If sample data is not available then you will need to manually type the field id that should map to External ID.\n\n**Field behavior**\n- Specifies which field from the incoming data contains the External ID value\n- **Used for upsert operations ONLY**\n- Maps the incoming data field to the Salesforce External ID field specified in `upsert.externalIdField`\n- Must reference a field that exists in the incoming data payload\n- The field value must be unique and always populated to ensure reliable matching\n- Generates field mapping at runtime to match incoming records with Salesforce records\n\n**When to use**\n- **upsert operations ONLY** — REQUIRED when operation is \"upsert\"\n- Works together with `upsert.externalIdField` to enable External ID matching\n\n**When not to use**\n- **update** — use `whereClause` instead\n- **delete** — use `whereClause` instead\n- **addupdate** — use `whereClause` instead\n- **insert with ignoreExisting** — use `whereClause` instead\n\n**Implementation guidance**\n- Choose a field from incoming data that contains unique, non-null values\n- Ensure the field is always populated in the incoming records to prevent operation failures\n- This field's value will be matched against the Salesforce External ID field specified in `upsert.externalIdField`\n- Use the exact field name as it appears in the incoming data (case-sensitive)\n- If sample data is available, validate that the field exists and has appropriate values\n- Test with sample data to ensure matching works correctly before production deployment\n\n**Example**\n\n**Upsert by External id**\n```json\n{\n  \"operation\": \"upsert\",\n  \"idLookup\": {\n    \"extract\": \"CustomerNumber\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\nMaps incoming field \"CustomerNumber\" → Salesforce field \"External_ID__c\"\n\n**How it works:**\n- Incoming record has `{\"CustomerNumber\": \"CUST-12345\", \"Name\": \"John\"}`\n- System extracts \"CUST-12345\" from CustomerNumber\n- Searches Salesforce External_ID__c field for \"CUST-12345\"\n- If found: updates the existing record\n- If not found: creates a new record with External_ID__c = \"CUST-12345\"\n\n**Common field names**\n- \"ExternalId\" — incoming field containing external identifier\n- \"CustomerId\" — incoming field with unique customer number\n- \"AccountNumber\" — incoming field with account number\n- \"SourceSystemId\" — incoming field from source system\n\n**Important notes**\n- **ONLY use for upsert operations**\n- For all other operations (update, delete, addupdate, insert with ignoreExisting), use `whereClause` instead\n- Field must exist in incoming data and be consistently populated\n- Works with `upsert.externalIdField` to enable External ID matching\n- Missing or null values will cause the upsert operation to fail\n- Field name is case-sensitive and must match exactly\n- If both `extract` and `whereClause` are set, `extract` takes precedence for upsert"},"whereClause":{"type":"string","description":"To determine if a record exists in Salesforce, define a SOQL query WHERE clause for the sObject type selected above. For example, if you are importing contact records with a unique email, your WHERE clause should identify contacts with the same email.\n\nHandlebars statements are allowed, and more complex WHERE clauses can contain AND and OR logic. 
For example, when the product name alone is not unique, you can define a WHERE clause to find any records where the product name exists AND the product also belongs to a specific product family, resulting in a unique product record that matches both criteria.\n\nTo find string values that contain spaces, wrap them in single quotes, for example:\n\nName = 'John Smith'\n\n**Field behavior**\n- Specifies a SOQL conditional expression to match existing records in Salesforce\n- Omit the \"WHERE\" keyword - only provide the condition expression\n- Supports Handlebars {{fieldName}} syntax to reference incoming data fields\n- Supports SOQL operators: =, !=, >, <, >=, <=, LIKE, IN, NOT IN\n- Supports logical operators: AND, OR, NOT\n- Can use parentheses for grouping complex conditions\n- The clause is evaluated for each incoming record to find matching Salesforce records\n\n**When to use (REQUIRED FOR)**\n- **update** — REQUIRED to identify which records to update\n- **delete** — REQUIRED to identify which records to delete\n- **addupdate** — REQUIRED to identify existing records (update if found, insert if not)\n- **insert with ignoreExisting: true** — REQUIRED to check if records exist (skip if found)\n\n**When not to use**\n- **upsert** — DO NOT use; use `extract` instead\n- **insert without ignoreExisting** — Not needed for simple inserts\n\n**Implementation guidance**\n- Write the clause following SOQL syntax rules, omitting the \"WHERE\" keyword\n- Use {{fieldName}} syntax to reference values from incoming records\n- Wrap string values containing spaces in single quotes: `Name = '{{FullName}}'`\n- Numeric and boolean values don't need quotes: `Age = {{Age}}`\n- Use indexed fields (Email, External ID) for better performance\n- Keep the clause selective to minimize records returned\n- Validate syntax before deployment to prevent runtime errors\n\n**Examples**\n\n**Update by Email**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\nFinds records with matching Email for update\n\n**Insert with ignore existing by Email**\n```json\n{\n  \"operation\": \"insert\",\n  \"ignoreExisting\": true,\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\nChecks if a record with matching Email exists; if found, skips creation\n\n**Addupdate (create or update) by Email**\n```json\n{\n  \"operation\": \"addupdate\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\nUpdates if Email matches, creates new record if not found\n\n**Delete by Account Number**\n```json\n{\n  \"operation\": \"delete\",\n  \"idLookup\": {\n    \"whereClause\": \"AccountNumber = '{{AcctNum}}'\"\n  }\n}\n```\nFinds records with matching AccountNumber for deletion\n\n**Complex and logic**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{ProductName}}' AND ProductFamily__c = '{{Family}}'\"\n  }\n}\n```\nMatches records where both Name AND ProductFamily match\n\n**Complex or logic**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}' OR Phone = '{{Phone}}'\"\n  }\n}\n```\nMatches records where Email OR Phone matches\n\n**String with spaces**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Name = '{{FullName}}'\"\n  }\n}\n```\nNote: Single quotes around {{FullName}} handle names with spaces like \"John Smith\"\n\n**Important notes**\n- **REQUIRED** for update, delete, addupdate, and 
insert (with ignoreExisting)\n- DO NOT use for upsert operations - use `extract` instead\n- Use {{fieldName}} to reference incoming data fields\n- String values with spaces must be wrapped in single quotes\n- Numeric and boolean values don't need quotes\n- Poor clause design can impact performance and hit governor limits\n- Always test in a non-production environment before production deployment"}},"description":"Configuration for looking up existing Salesforce records. **This object should ONLY contain either `extract` OR `whereClause` - NOT both, and NOT operation or other properties.**\n\n**What this object contains**\nThis object should contain ONLY ONE of these properties:\n- `extract` — for upsert operations ONLY\n- `whereClause` — for update, delete, addupdate, and insert (with ignoreExisting)\n\n**DO NOT include operation, upsert, or any other properties here** - those belong at the parent salesforce level.\n\n**Field behavior**\n- Contains two strategies for finding existing records: `extract` and `whereClause`\n- **`extract`**: Used ONLY for upsert operations\n- **`whereClause`**: Used for update, delete, addupdate, and insert (with ignoreExisting)\n- Generates field mapping or SOQL queries at runtime to match records\n- Critical for preventing duplicate record creation and enabling record updates\n\n**When to use each field**\n\n**Use `extract` (upsert ONLY)**\n- **upsert** — REQUIRED: Maps incoming field to External ID field\n\n**Use `whereClause` (all other operations)**\n- **update** — REQUIRED: Identifies records to update via SOQL query\n- **delete** — REQUIRED: Identifies records to delete via SOQL query\n- **addupdate** — REQUIRED: Identifies existing records via SOQL query\n- **insert with ignoreExisting: true** — REQUIRED: Checks if records exist via SOQL query\n\n**Leave empty**\n- **insert without ignoreExisting** — No lookup needed for simple inserts\n\n**What to set (idLookup object content)**\n\n**For upsert**\n```json\n{\n  \"extract\": \"CustomerNumber\"\n}\n```\n\n**For update/delete/addupdate**\n```json\n{\n  \"whereClause\": \"Email = '{{Email}}'\"\n}\n```\n\n**Parent context examples (for reference)**\nThese show the complete parent-level structure. 
**idLookup itself should only contain extract or whereClause.**\n\n**Upsert (parent context)**\n```json\n{\n  \"operation\": \"upsert\",\n  \"idLookup\": {\n    \"extract\": \"CustomerNumber\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\n\n**Update (parent context)**\n```json\n{\n  \"operation\": \"update\",\n  \"idLookup\": {\n    \"whereClause\": \"Email = '{{Email}}'\"\n  }\n}\n```\n\n**Implementation guidance**\n- Use `extract` ONLY for upsert operations (with `upsert.externalIdField`)\n- Use `whereClause` for update, delete, addupdate, and insert with ignoreExisting\n- Do not set both `extract` and `whereClause` for the same operation\n- **ONLY set extract or whereClause - do not nest operation or other properties**\n- Ensure the lookup strategy matches the operation type\n- Test thoroughly in a non-production environment before production deployment\n\n**Important notes**\n- **This object ONLY contains `extract` or `whereClause`**\n- **`extract`** is for upsert ONLY - maps to External ID field\n- **`whereClause`** is for all other operations requiring lookups\n- Do NOT nest `operation`, `upsert`, or other properties here - those belong at the parent salesforce level\n- For upsert: `extract` + `upsert.externalIdField` work together (both at salesforce level)\n- For other operations: `whereClause` enables SOQL-based record matching\n- Incorrect configuration can cause duplicates or failed operations"},"upsert":{"type":"object","properties":{"externalIdField":{"type":"string","description":"Specifies the API name of the External ID field in Salesforce that will be used to match records during an upsert operation. This is the TARGET field in Salesforce, while `idLookup.extract` specifies the SOURCE field from incoming data.\n\n**Field behavior**\n- Identifies which Salesforce field serves as the external ID for record matching\n- Must be a field marked as \"External ID\" in Salesforce object configuration\n- Works together with `idLookup.extract` to enable upsert matching\n- The field must be indexed and unique in Salesforce\n- Supports text, number, and email field types configured as external IDs\n- Required when `operation: \"upsert\"`\n\n**How it works with idLookup.extract**\n- **`idLookup.extract`**: Specifies which incoming data field contains the matching value (e.g., \"CustomerNumber\")\n- **`externalIdField`**: Specifies which Salesforce field to match against (e.g., \"External_ID__c\")\n- Together they create the match: incoming \"CustomerNumber\" → Salesforce \"External_ID__c\"\n\n**Implementation guidance**\n- Use the exact Salesforce API name of the field (case-sensitive)\n- Verify the field is marked as \"External ID\" in Salesforce object settings\n- Ensure the field is indexed for optimal performance\n- Confirm the integration user has read/write access to this field\n- The field must have unique values to prevent ambiguous matches\n- Test in a non-production environment before production to verify configuration\n\n**Examples**\n\n**Standard External id field**\n```\n\"AccountNumber\"\n```\nStandard Account field marked as External ID\n\n**Custom External id field**\n```\n\"Custom_External_Id__c\"\n```\nCustom field marked as External ID\n\n**Email as External id**\n```\n\"Email\"\n```\nEmail field configured as External ID on Contact\n\n**NetSuite id as External id**\n```\n\"netsuite_conn__NetSuite_Id__c\"\n```\nNetSuite connector External ID field\n\n**How it works (Parent Context)**\nAt the parent `salesforce` level, the complete 
configuration looks like:\n```json\n{\n  \"operation\": \"upsert\",\n  \"idLookup\": {\n    \"extract\": \"CustomerNumber\"\n  },\n  \"upsert\": {\n    \"externalIdField\": \"External_ID__c\"\n  }\n}\n```\nWhere:\n- `operation: \"upsert\"` — at salesforce level\n- `idLookup.extract: \"CustomerNumber\"` — incoming field, at salesforce level\n- `upsert.externalIdField: \"External_ID__c\"` — **THIS field**, inside upsert object\n\n**Common external id fields**\n- \"AccountNumber\" — standard Account field\n- \"Email\" — when configured as External ID on Contact\n- \"Custom_External_Id__c\" — custom field on any object\n- \"External_ID__c\" — common naming convention\n- \"Source_System_Id__c\" — for multi-system integrations\n\n**Important notes**\n- Field must be explicitly marked as \"External ID\" in Salesforce\n- Not all fields can be External IDs (must be text, number, or email type)\n- Missing or null values in this field will cause upsert to fail\n- If multiple records match the External ID, upsert will fail\n- Always use with `idLookup.extract` for upsert operations"}},"description":"Configuration for Salesforce upsert operations. This object contains ONLY the `externalIdField` property.\n\n**CRITICAL: This component is REQUIRED when `operation: \"upsert\"`. Always include this component for upsert operations.**\n\n**Field behavior**\n- **REQUIRED** when `operation: \"upsert\"`\n- Contains configuration specific to upsert behavior\n- **ONLY contains ONE property: `externalIdField`**\n- Works in conjunction with `idLookup.extract` (which is at the SAME level as this upsert object, NOT nested within it)\n\n**When to use**\n- **REQUIRED** when `operation` is \"upsert\" - MUST be included\n- Leave undefined for insert, update, delete, or addupdate operations\n- Without this object, upsert operations will fail\n\n**Why IT'S required**\n- Upsert operations need to know which Salesforce field to use for matching\n- The `externalIdField` property specifies the Salesforce External ID field\n- This works with `idLookup.extract` to enable record matching\n- Missing this object will cause the import to fail\n\n**Decision**\n**Return true/include this component if `operation: \"upsert\"` is set. This is REQUIRED for all upsert operations.**\n\n**What to set**\n**This object should ONLY contain:**\n```json\n{\n  \"externalIdField\": \"External_ID__c\"\n}\n```\n\n**DO NOT include operation, idLookup, or any other properties here** - those belong at the parent salesforce level.\n\n**Parent context (for reference only)**\nAt the parent `salesforce` level, you will also set:\n- `operation: \"upsert\"` (at salesforce level)\n- `idLookup.extract` (at salesforce level)\n- `upsert.externalIdField` (THIS object)\n\n**Implementation guidance**\n- Always set `externalIdField` when using upsert\n- Ensure the specified External ID field exists in Salesforce\n- Verify the field is marked as \"External ID\" in Salesforce\n- **ONLY set the externalIdField property - nothing else**\n\n**Important notes**\n- **This object ONLY contains `externalIdField`**\n- Do NOT nest `operation`, `idLookup`, or other properties here\n- Those properties belong at the parent salesforce level, not nested in this upsert object\n- May be extended with additional upsert-specific options in the future\n- Only relevant when operation is \"upsert\""},"upsertpicklistvalues":{"type":"object","properties":{"type":{"type":"string","enum":["picklist","multipicklist"],"description":"Specifies the type of picklist field being managed. 
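\n\nFor context, a complete `upsertpicklistvalues` object built from this field and its siblings (`fullName`, `label`, `visibleLines`) might look like the following illustrative sketch:\n```json\n{\n  \"type\": \"multipicklist\",\n  \"fullName\": \"Account.MyPicklist__c\",\n  \"label\": \"My Picklist\",\n  \"visibleLines\": 5\n}\n```\n\n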
Salesforce supports single-select picklists and multi-select picklists.\n\n**Field behavior**\n- Determines whether the picklist allows single or multiple value selection\n- Affects UI rendering and validation in Salesforce\n- Automatically converted to lowercase before processing\n\n**Picklist types**\n\n****picklist****\n- Standard single-select picklist\n- User can select only one value at a time\n- Most common picklist type\n\n****multipicklist****\n- Multi-select picklist\n- User can select multiple values (separated by semicolons)\n- Use for categories, tags, or multi-value selections\n\n**Examples**\n- \"picklist\" — for Status, Priority, Category fields\n- \"multipicklist\" — for Tags, Skills, Interests fields"},"fullName":{"type":"string","description":"The API name of the picklist field in Salesforce, including the object name.\n\n**Field behavior**\n- Fully qualified field name: ObjectName.FieldName__c\n- Must match exact Salesforce API name\n- Case-sensitive\n- Required for picklist value upsert operations\n\n**Examples**\n- \"Account.MyPicklist__c\"\n- \"Contact.Status__c\"\n- \"CustomObject__c.Category__c\""},"label":{"type":"string","description":"The display label for the picklist field in Salesforce UI.\n\n**Field behavior**\n- Human-readable label shown in Salesforce interface\n- Can contain spaces and special characters\n- Does not need to match API name\n\n**Examples**\n- \"My Picklist\"\n- \"Product Category\"\n- \"Customer Status\""},"visibleLines":{"type":"number","description":"For multi-select picklists, specifies how many lines are visible in the UI without scrolling.\n\n**Field behavior**\n- Only applies to multipicklist fields\n- Controls height of the picklist selection box\n- Default is typically 4-5 lines\n\n**Examples**\n- 3 — shows 3 values at once\n- 5 — shows 5 values at once\n- 10 — shows more values for large lists"}},"description":"Configuration for managing Salesforce picklist field metadata when using the `upsertpicklistvalues` operation. This enables dynamic creation or updates to picklist field definitions.\n\n**Field behavior**\n- Used ONLY when `operation: \"upsertpicklistvalues\"`\n- Manages picklist field metadata, not picklist values themselves\n- Enables programmatic picklist field management\n- Requires appropriate Salesforce metadata permissions\n\n**When to use**\n- When operation is set to \"upsertpicklistvalues\"\n- For managing picklist field definitions programmatically\n- When syncing picklist configurations between orgs\n- For automated picklist field deployment\n\n**Important notes**\n- This manages the FIELD itself, not the individual picklist VALUES\n- Requires Salesforce API and Metadata API permissions\n- Should only be set when operation is \"upsertpicklistvalues\"\n- Most imports won't use this - it's for metadata management"},"removeNonSubmittableFields":{"type":"boolean","description":"Indicates whether fields that are not eligible for submission to Salesforce—such as read-only, system-generated, formula, or otherwise restricted fields—should be automatically excluded from the data payload before it is sent. This feature helps prevent submission errors by ensuring that only valid, submittable fields are included in the API request, thereby improving data integrity and reducing the likelihood of operation failures. 
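\n\nA minimal illustrative placement of this flag within the salesforce configuration (the surrounding fields are assumed for context only):\n```json\n{\n  \"operation\": \"update\",\n  \"api\": \"soap\",\n  \"sObjectType\": \"Account\",\n  \"removeNonSubmittableFields\": true\n}\n```\n\n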
It operates exclusively on the outgoing submission payload without modifying the original source data, making it a safe and effective way to sanitize data before interaction with Salesforce APIs.\n\n**Field behavior**\n- When enabled (set to true), the system analyzes the outgoing data payload and removes any fields identified as non-submittable based on Salesforce metadata, field-level security, and API constraints.\n- When disabled (set to false) or omitted, all fields present in the payload are submitted as-is, which may lead to errors if restricted, read-only, or system-managed fields are included.\n- Enhances data integrity by proactively filtering out fields that could cause rejection or failure during Salesforce data operations.\n- Applies only to the outgoing submission payload, ensuring the original source data remains unchanged and intact.\n- Dynamically adapts to different Salesforce objects, API versions, and user permissions by referencing up-to-date metadata and security settings.\n\n**Implementation guidance**\n- Enable this flag in integration scenarios where the data schema may include fields that Salesforce does not accept for the target object or API version.\n- Maintain an up-to-date cache or real-time access to Salesforce metadata to accurately reflect field-level permissions, submittability, and API changes.\n- Implement logging or reporting mechanisms to track which fields are removed, aiding in auditing, troubleshooting, and transparency.\n- Use in conjunction with other data validation, transformation, and permission-checking steps to ensure clean, compliant data submissions.\n- Thoroughly test in development and staging environments to understand the impact on data flows, error handling, and downstream processes.\n- Consider user roles and permissions, as submittability may vary depending on the authenticated user's access rights and profile settings.\n\n**Examples**\n- `removeNonSubmittableFields: true` — Automatically removes read-only or system-managed fields such as CreatedDate, LastModifiedById, or formula fields before submission.\n- `removeNonSubmittableFields: false` — Submits all fields, potentially causing errors if restricted or read-only fields are included.\n- Omitting the property defaults to false, meaning no automatic removal of non-submittable"},"document":{"type":"object","properties":{"id":{"type":"string","description":"Unique identifier for the document within the Salesforce system, serving as the primary key that distinctly identifies each document record. This identifier is immutable once assigned and must not be altered or reused to maintain data integrity. It is essential for referencing the document in API calls, queries, integrations, and any operations involving the document. The ID is a base-62 encoded string that can be either 15 characters long, which is case-sensitive, or 18 characters long, which includes a case-insensitive checksum to enhance reliability in external systems. Proper handling of this ID ensures accurate linkage and manipulation of document records across various Salesforce processes and integrations.  \n**Field behavior:**  \n- Acts as the primary key uniquely identifying a document record within Salesforce.  \n- Immutable after creation; should never be changed or reassigned.  \n- Used consistently to reference the document across API calls, queries, and integrations.  \n- Case-sensitive and may vary in length (15 or 18 characters).  
\n- Serves as a critical reference point for establishing relationships with other records.  \n**Implementation guidance:**  \n- Must be generated exclusively by Salesforce or the managing system to guarantee uniqueness.  \n- Should be handled as a string to accommodate Salesforce’s ID formats.  \n- Validate incoming IDs against Salesforce ID format standards to ensure correctness.  \n- Avoid manual generation or modification of IDs to prevent data inconsistencies.  \n- Use the 18-character format when possible to leverage the checksum for case-insensitive environments.  \n**Examples:**  \n- \"0151t00000ABCDE\"  \n- \"0151t00000ABCDEAAA\"  \n**Important notes:**  \n- IDs are case-sensitive; 15-character IDs are case-sensitive, while 18-character IDs include a case-insensitive checksum.  \n- Do not manually create or alter IDs; always use Salesforce-generated values.  \n- This ID is critical for performing document updates, deletions, retrievals, and establishing relationships.  \n- Using the correct ID format is essential for API compatibility and data integrity.  \n**Dependency chain:**  \n- Required for all operations on salesforce.document entities, including CRUD actions.  \n- Other fields and processes may reference this ID to link related records or trigger actions.  \n- Integral to workflows, triggers, and integrations that depend on document identification.  \n**Technical** DETAILS:  \n- Salesforce record IDs are base-62 encoded strings.  \n- IDs can be 15 characters (case-sensitive) or 18"},"name":{"type":"string","description":"The name of the document as stored in Salesforce, serving as a unique and human-readable identifier within the Salesforce environment. This string value represents the document's title or label, enabling users and systems to easily recognize, retrieve, and manage the document across user interfaces, search functionalities, and API interactions. It is a critical attribute used to reference the document in various operations such as creation, updates, searches, display contexts, and integration workflows, ensuring clear identification and organization within folders or libraries. 
The name plays a pivotal role in document management by facilitating sorting, filtering, and categorization, and it must adhere to Salesforce’s naming conventions and constraints to maintain system integrity and usability.\n\n**Field behavior**\n- Acts as the primary human-readable identifier for the document.\n- Used in UI elements, search queries, and API calls to locate or reference the document.\n- Typically mandatory when creating or updating document records.\n- Should be unique within the scope of the relevant Salesforce object, folder, or library to avoid ambiguity.\n- Influences sorting, filtering, and organization of documents within lists or folders.\n- Changes to the name may trigger updates or validations in dependent systems or references.\n- Supports international characters to accommodate global users.\n\n**Implementation guidance**\n- Choose descriptive yet concise names to enhance clarity, usability, and discoverability.\n- Validate input to exclude unsupported special characters and ensure compliance with Salesforce naming rules.\n- Enforce uniqueness constraints within the relevant context to prevent conflicts and confusion.\n- Consider the impact of renaming documents on existing references, hyperlinks, integrations, and audit trails.\n- Account for potential length restrictions and implement truncation or user prompts as necessary.\n- Support internationalization by allowing UTF-8 encoded characters where appropriate.\n- Implement validation to prevent use of reserved words or prohibited characters.\n\n**Examples**\n- \"Quarterly Sales Report Q1 2024\"\n- \"Employee Handbook\"\n- \"Project Plan - Alpha Release\"\n- \"Marketing Strategy Overview\"\n- \"Client Contract - Acme Corp\"\n\n**Important notes**\n- The maximum length of the name may be limited (commonly up to 255 characters), depending on Salesforce configuration.\n- Renaming a document can affect external references, hyperlinks, or integrations relying on the original name.\n- Case sensitivity in name comparisons may vary based on Salesforce settings and should be considered when implementing logic.\n- Special characters and whitespace handling should align with Salesforce’s accepted standards to avoid errors.\n- Avoid using reserved words or prohibited characters to ensure compatibility and"},"folderId":{"type":"string","description":"The unique identifier of the folder within Salesforce where the document is stored. This ID is essential for organizing, categorizing, and managing documents efficiently within the Salesforce environment. It enables precise association of documents to specific folders, facilitating streamlined retrieval, access control, and hierarchical organization of document assets. By linking a document to a folder, it inherits the folder’s permissions and visibility settings, ensuring consistent access management and supporting structured document workflows. 
Proper use of this identifier allows for effective filtering, querying, and management of documents based on their folder location, which is critical for maintaining an organized and secure document repository.\n\n**Field behavior**\n- Identifies the exact folder containing the document.\n- Associates the document with a designated folder for structured organization.\n- Must reference a valid, existing folder ID within Salesforce.\n- Typically represented as a string conforming to Salesforce ID formats (15- or 18-character).\n- Influences document visibility and access based on folder permissions.\n- Changing the folderId moves the document to a different folder, affecting its context and access.\n- Inherits folder-level sharing and security settings automatically.\n\n**Implementation guidance**\n- Verify that the folderId exists and is accessible in the Salesforce instance before assignment.\n- Validate the folderId format to ensure it matches Salesforce’s 15- or 18-character ID standards.\n- Use this field to filter, query, or categorize documents by their folder location in API operations.\n- When creating or updating documents, setting this field assigns or moves the document to the specified folder.\n- Handle permission checks to ensure the user or integration has rights to assign or access the folder.\n- Consider the impact on sharing rules, workflows, and automation triggered by folder changes.\n- Ensure case sensitivity is preserved when handling folder IDs.\n\n**Examples**\n- \"00l1t000003XyzA\" (example of a Salesforce folder ID)\n- \"00l5g000004AbcD\"\n- \"00l9m00000FghIj\"\n\n**Important notes**\n- The folderId must be valid and the folder must be accessible by the user or integration performing the operation.\n- Using an incorrect or non-existent folderId will cause errors or result in improper document categorization.\n- Folder-level permissions in Salesforce can restrict the ability to assign documents or access folder contents.\n- Changes to folderId may affect document sharing, visibility settings, and trigger related workflows.\n- Folder IDs are case-sensitive and must be handled accordingly.\n- Moving documents between folders"},"contentType":{"type":"string","description":"The MIME type of the document, specifying the exact format of the file content stored within the Salesforce document. This field enables systems and applications to accurately identify, process, render, or handle the document by indicating its media type, such as PDF, image, audio, or text formats. It adheres to standard MIME type conventions (type/subtype) and may include additional parameters like character encoding when necessary. Properly setting this field ensures seamless integration, correct display, appropriate application usage, and reliable content negotiation across diverse platforms and services. 
It plays a critical role in workflows, security scanning, compliance validation, and API interactions by guiding how the document is stored, transmitted, and interpreted.\n\n**Field behavior**\n- Defines the media type of the document content to guide processing, rendering, and handling.\n- Identifies the specific file format (e.g., PDF, JPEG, DOCX) for compatibility and validation purposes.\n- Facilitates content negotiation and appropriate response handling between systems and applications.\n- Typically formatted as a standard MIME type string (type/subtype), optionally including parameters such as charset.\n- Influences document handling in workflows, security scans, user interfaces, and API interactions.\n- Serves as a key attribute for determining how the document is stored, transmitted, and displayed.\n\n**Implementation guidance**\n- Always use valid MIME types conforming to official standards (e.g., \"application/pdf\", \"image/png\") to ensure broad compatibility.\n- Verify that the contentType accurately reflects the actual file content to prevent processing errors or misinterpretation.\n- Update the contentType promptly if the document format changes to maintain consistency and reliability.\n- Utilize this field to enable correct content handling in APIs, integrations, client applications, and automated workflows.\n- Include relevant parameters (such as charset for text-based documents) when applicable to provide additional context.\n- Consider validating the contentType against the file extension and content during upload or processing to enhance data integrity.\n\n**Examples**\n- \"application/pdf\" for PDF documents.\n- \"image/jpeg\" for JPEG image files.\n- \"application/vnd.openxmlformats-officedocument.wordprocessingml.document\" for Microsoft Word DOCX files.\n- \"text/plain; charset=utf-8\" for plain text files with UTF-8 encoding.\n- \"application/json\" for JSON formatted documents.\n- \"audio/mpeg\" for MP3 audio files.\n\n**Important notes**\n- Incorrect or mismatched contentType values can cause improper document rendering, processing failures,"},"developerName":{"type":"string","description":"developerName is a unique, developer-friendly identifier assigned to the document within the Salesforce environment. It acts as a stable and consistent reference used primarily in programmatic contexts such as Apex code, API calls, metadata configurations, deployment scripts, and integration workflows. This identifier ensures reliable access, manipulation, and management of the document across various Salesforce components and external systems. The value should be concise, descriptive, and strictly adhere to Salesforce naming conventions to prevent conflicts and ensure maintainability. 
Typically, it employs camelCase or underscores, excludes spaces and special characters, and is designed to remain unchanged over time to avoid breaking dependencies or integrations.\n\n**Field behavior**\n- Serves as a unique and immutable identifier for the document within the Salesforce org.\n- Widely used in code, APIs, metadata, deployment processes, and integrations to reference the document reliably.\n- Should remain stable and not be altered frequently to maintain integration and system stability.\n- Enforces naming conventions that disallow spaces and special characters, favoring camelCase or underscores.\n- Case-insensitive in many contexts but consistent casing is recommended for clarity and maintainability.\n- Acts as a key reference point in metadata relationships and dependency mappings.\n\n**Implementation guidance**\n- Ensure the developerName is unique within the Salesforce environment to prevent naming collisions.\n- Follow Salesforce naming rules: start with a letter, use only alphanumeric characters and underscores.\n- Avoid reserved keywords, spaces, special characters, and punctuation marks.\n- Keep the identifier concise yet descriptive enough to clearly convey the document’s purpose or function.\n- Validate the developerName against Salesforce’s character set, length restrictions, and naming policies before deployment.\n- Plan for the developerName to remain stable post-deployment to avoid breaking references and integrations.\n- Use consistent casing conventions (camelCase or underscores) to improve readability and reduce errors.\n- Incorporate meaningful terms that reflect the document’s role or content for easier identification.\n\n**Examples**\n- \"InvoiceTemplate\"\n- \"CustomerAgreementDoc\"\n- \"SalesReport2024\"\n- \"ProductCatalog_v2\"\n- \"EmployeeOnboardingGuide\"\n\n**Important notes**\n- Changing the developerName after deployment can disrupt integrations, metadata references, and dependent components.\n- It is distinct from the document’s display name, which can include spaces and be more user-friendly.\n- While Salesforce treats developerName as case-insensitive in many scenarios, maintaining consistent casing improves readability and maintenance.\n- Commonly referenced in metadata APIs, deployment tools, automation scripts, and"},"isInternalUseOnly":{"type":"boolean","description":"isInternalUseOnly indicates whether the document is intended exclusively for internal use within the organization and must not be shared with external parties. This flag is critical for protecting sensitive, proprietary, or confidential information by restricting access strictly to authorized internal users. It serves as a definitive marker within the document’s metadata to enforce internal visibility policies, prevent accidental or unauthorized external distribution, and support compliance with organizational security standards. 
Proper handling of this flag ensures that internal communications, strategic plans, financial data, and other sensitive materials remain protected from exposure outside the company.\n\n**Field behavior**\n- When set to true, the document is strictly restricted to internal users and must not be accessible or shared with any external entities.\n- When false or omitted, the document may be accessible to external users, subject to other access control mechanisms.\n- Commonly used to flag documents containing sensitive business strategies, internal communications, proprietary data, or confidential policies.\n- Influences user interface elements and API responses to hide, restrict, or clearly label document visibility accordingly.\n- Changes to this flag can trigger notifications, workflow actions, or audit events to ensure proper handling and compliance.\n- May affect document indexing, search results, and export capabilities to prevent unauthorized dissemination.\n\n**Implementation guidance**\n- Always verify this flag before granting document access in both user interfaces and backend APIs to ensure compliance with internal use policies.\n- Use in conjunction with role-based access controls, group memberships, document classification labels, and data sensitivity tags to enforce comprehensive security.\n- Carefully manage updates to this flag to prevent inadvertent exposure of confidential information; consider implementing approval workflows and multi-level reviews for changes.\n- Implement audit logging for all changes to this flag to support compliance, traceability, and forensic investigations.\n- Ensure all integrated systems, third-party applications, and data export mechanisms respect this flag to maintain consistent access restrictions.\n- Provide clear UI indicators, warnings, or access restrictions to users when accessing documents marked as internal use only.\n- Regularly review and validate the accuracy of this flag to maintain up-to-date security postures.\n\n**Examples**\n- true: A confidential internal report on upcoming product launches accessible only to employees.\n- false: A publicly available user manual or product datasheet intended for customers and partners.\n- true: Internal HR policies and procedures documents restricted to company staff.\n- false: Press releases or marketing materials distributed externally.\n- true: Financial forecasts and budget plans shared exclusively within the finance department.\n- true: Internal audit findings and compliance reports"},"isPublic":{"type":"boolean","description":"Indicates whether the document is publicly accessible to all users or restricted exclusively to authorized users within the system. This property governs the document's visibility and access permissions, determining if authentication is required to view its content. When set to true, the document becomes openly available without any access controls, allowing anyone—including unauthenticated users—to access it freely. Conversely, setting this flag to false enforces security measures that limit access based on user roles, permissions, and organizational sharing rules, ensuring that only authorized personnel can view the document. 
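\n\nFor orientation, a minimal, hypothetical fragment pairing this flag with isInternalUseOnly (values are placeholders; the enclosing import payload shape is not prescribed by this property):\n\n```json\n{\n  \"document\": {\n    \"name\": \"Internal HR Policies\",\n    \"isInternalUseOnly\": true,\n    \"isPublic\": false\n  }\n}\n```\n\n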
This setting directly impacts how the document appears in search results, sharing interfaces, and external indexing services, making it a critical factor in managing data privacy, compliance, and information governance.\n\n**Field behavior**\n- When true, the document is accessible by anyone, including unauthenticated users.\n- When false, access is restricted to users with explicit permissions or roles.\n- Directly influences the document’s visibility in search results, sharing interfaces, and external indexing.\n- Changes to this property take immediate effect, altering who can view the document.\n- Modifications may trigger updates in access control enforcement mechanisms.\n\n**Implementation guidance**\n- Use this property to clearly define and enforce document sharing and visibility policies.\n- Always set to false for sensitive, confidential, or proprietary documents to prevent unauthorized access.\n- Implement strict validation to ensure only users with appropriate administrative rights can modify this property.\n- Consider organizational compliance requirements, privacy regulations, and data security standards before making documents public.\n- Regularly monitor and audit changes to this property to maintain security oversight and compliance.\n- Plan changes carefully, as toggling this setting can have immediate and broad impact on document accessibility.\n\n**Examples**\n- true: A publicly available product catalog, marketing brochure, or press release intended for external audiences.\n- false: Internal project plans, financial reports, HR documents, or any content restricted to company personnel.\n\n**Important notes**\n- Making a document public may expose it to indexing by search engines if accessible via public URLs.\n- Changes to this property have immediate effect on document accessibility; ensure appropriate change management.\n- Ensure alignment with corporate governance, legal requirements, and data protection policies when toggling public access.\n- Public documents should be reviewed periodically to confirm that their visibility remains appropriate.\n- Unauthorized changes to this property can lead to data leaks or compliance violations.\n\n**Dependency chain**\n- Access control depends on user roles, profiles, and permission sets configured within the system.\n- Interacts"}},"description":"The 'document' property represents a digital file or record associated with a Salesforce entity, encompassing a wide range of file types such as attachments, reports, images, contracts, spreadsheets, and other files stored within the Salesforce platform. It acts as a comprehensive container that includes both the file's binary content—typically encoded in base64—and its associated metadata, such as file name, description, MIME type, and tags. This property enables seamless integration, retrieval, upload, update, and management of files within Salesforce-related workflows and processes. By linking relevant documents directly to Salesforce records, it enhances data organization, accessibility, collaboration, and compliance across various business functions. 
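\n\nAs a rough, illustrative sketch only (field values below are placeholders drawn from the per-field examples in this schema; the enclosing import payload structure is not prescribed here):\n\n```json\n{\n  \"document\": {\n    \"name\": \"Quarterly Sales Report Q1 2024\",\n    \"folderId\": \"00l1t000003XyzA\",\n    \"contentType\": \"application/pdf\",\n    \"developerName\": \"SalesReport2024\",\n    \"isInternalUseOnly\": true,\n    \"isPublic\": false\n  }\n}\n```\n\n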
Additionally, it supports versioning, audit trails, and can trigger automation processes like workflows, validation rules, and triggers, ensuring robust document lifecycle management within the Salesforce ecosystem.\n\n**Field behavior**\n- Contains both the binary content (usually base64-encoded) and descriptive metadata of a file linked to a Salesforce record.\n- Supports a diverse array of file types including PDFs, images, reports, contracts, spreadsheets, and other common document formats.\n- Facilitates operations such as uploading new files, retrieving existing files, updating metadata, deleting documents, and managing versions within Salesforce.\n- May be required or optional depending on the specific API operation, object context, or organizational business rules.\n- Changes to the document content or metadata can trigger Salesforce workflows, triggers, validation rules, or automation processes.\n- Maintains versioning and audit trails when integrated with Salesforce Content or Files features.\n- Respects Salesforce security and sharing settings to control access and protect sensitive information.\n\n**Implementation guidance**\n- Encode the document content in base64 format when transmitting via API to ensure data integrity and compatibility.\n- Validate file size and type against Salesforce platform limits and organizational policies before upload to prevent errors.\n- Provide comprehensive metadata including file name, description, content type (MIME type), file extension, and relevant tags or categories to facilitate search, classification, and governance.\n- Manage user permissions and access controls in accordance with Salesforce security and sharing settings to safeguard sensitive data.\n- Use appropriate Salesforce API endpoints (e.g., /sobjects/Document, Attachment, ContentVersion) and HTTP methods aligned with the document type and intended operation.\n- Implement robust error handling to manage responses related to file size limits, unsupported formats, permission denials, or network issues.\n- Leverage Salesforce Content or Files features for enhanced document management capabilities such as version control, sharing,"},"attachment":{"type":"object","properties":{"id":{"type":"string","description":"Unique identifier for the attachment within the Salesforce system, serving as the immutable primary key for the attachment object. This ID is essential for accurately referencing, retrieving, updating, or deleting the specific attachment record through API calls. It must conform to Salesforce's standardized ID format, which can be either a 15-character case-sensitive or an 18-character case-insensitive alphanumeric string, with the 18-character version preferred for integrations due to its case insensitivity and reduced risk of errors. 
While the ID itself does not contain metadata about the attachment's content, it is crucial for linking the attachment to related Salesforce records and ensuring precise operations on the attachment resource across the platform.\n\n**Field behavior**\n- Acts as the unique and immutable primary key for the attachment record.\n- Required for all API operations involving the attachment, including retrieval, updates, and deletions.\n- Ensures consistent and precise identification of the attachment in all Salesforce API interactions.\n- Remains constant and unchangeable throughout the lifecycle of the attachment once created.\n\n**Implementation guidance**\n- Must strictly adhere to Salesforce ID formats: either 15-character case-sensitive or 18-character case-insensitive alphanumeric strings.\n- Should be dynamically obtained from Salesforce API responses when creating or querying attachments to ensure accuracy.\n- Avoid hardcoding or manually entering IDs to prevent data integrity issues and API errors.\n- Validate the ID format programmatically before use to ensure compatibility with Salesforce API requirements.\n- Prefer using the 18-character ID in integration scenarios to minimize case sensitivity issues.\n\n**Examples**\n- \"00P1t00000XyzAbCDE\"\n- \"00P1t00000XyzAbCDEAAA\"\n- \"00P3t00000LmNOpQR1\"\n\n**Important notes**\n- The 15-character ID is case-sensitive, whereas the 18-character ID is case-insensitive and recommended for integration scenarios.\n- Using an incorrect, malformed, or outdated ID will result in API errors or failed operations.\n- The ID does not convey any descriptive information about the attachment’s content or metadata.\n- Always retrieve and use the ID directly from Salesforce API responses to ensure validity and prevent errors.\n- The ID is unique within the Salesforce org and cannot be reused or duplicated.\n\n**Dependency chain**\n- Generated automatically by Salesforce upon creation of the attachment record.\n- Utilized by other API calls and properties that require precise attachment identification.\n- Linked to parent or related Salesforce objects, integrating the attachment"},"name":{"type":"string","description":"The name property represents the filename of the attachment within Salesforce and serves as the primary identifier for the attachment. It is crucial for enabling easy recognition, retrieval, and management of attachments both in the user interface and through API operations. This property typically includes the file extension, which specifies the file type and ensures proper handling, display, and processing of the file across different systems and platforms. The filename should be meaningful, descriptive, and concise to provide clarity and ease of use, especially when managing multiple attachments linked to a single parent record. 
Adhering to proper naming conventions helps maintain organization, improves searchability, and prevents conflicts or ambiguities that could arise from duplicate or unclear filenames.\n\n**Field behavior**\n- Stores the filename of the attachment as a string value.\n- Acts as the key identifier for the attachment within the Salesforce environment.\n- Displayed prominently in the Salesforce UI and API responses when viewing or managing attachments.\n- Should be unique within the context of the parent record to avoid confusion.\n- Includes file extensions (e.g., .pdf, .jpg, .docx) to denote the file type and facilitate correct processing.\n- Case sensitivity may apply depending on the context, affecting retrieval and display.\n- Used in URL generation and API endpoints, impacting accessibility and referencing.\n\n**Implementation guidance**\n- Validate filenames to exclude prohibited characters (such as \\ / : * ? \" < > |) and other special characters that could cause errors or security vulnerabilities.\n- Ensure the filename length does not exceed Salesforce’s maximum limit, typically 255 characters.\n- Always include the appropriate and accurate file extension to enable correct file type recognition.\n- Use clear, descriptive, and consistent naming conventions that reflect the content or purpose of the attachment.\n- Perform validation checks before upload to prevent failures or inconsistencies.\n- Avoid renaming attachments post-upload to maintain integrity of references and links.\n- Consider localization and encoding standards to support international characters where applicable.\n\n**Examples**\n- \"contract_agreement.pdf\"\n- \"profile_picture.jpg\"\n- \"sales_report_q1_2024.xlsx\"\n- \"meeting_notes.txt\"\n- \"project_plan_v2.docx\"\n\n**Important notes**\n- Filenames are case-sensitive in certain contexts, which can impact retrieval and display.\n- Avoid special characters that may interfere with URLs, file systems, or API processing.\n- Renaming an attachment after upload can disrupt existing references or links to the file.\n- Consistency between the filename and the actual content enhances user"},"parentId":{"type":"string","description":"The unique identifier of the parent Salesforce record to which the attachment is linked. This field establishes a direct and essential relationship between the attachment and its parent object, such as an Account, Contact, Opportunity, or any other Salesforce record type that supports attachments. It ensures that attachments are correctly associated within the Salesforce data model, enabling accurate retrieval, management, and organization of attachments in relation to their parent records. By linking attachments to their parent records, this field facilitates data integrity, efficient querying, and seamless navigation between related records. It is a mandatory field when creating or updating attachments and must reference a valid, existing Salesforce record ID. 
Proper use of this field guarantees referential integrity, enforces access controls based on parent record permissions, and impacts the visibility and lifecycle of the attachment within the Salesforce environment.\n\n**Field behavior**\n- Represents the Salesforce record ID that the attachment is associated with.\n- Mandatory when creating or updating an attachment to specify its parent record.\n- Must correspond to an existing Salesforce record ID of a valid object type that supports attachments.\n- Enables querying, filtering, and reporting of attachments based on their parent record.\n- Enforces referential integrity between attachments and parent records.\n- Changes to the parent record, including deletion or modification, can affect the visibility, accessibility, and lifecycle of the attachment.\n- Access to the attachment is governed by the permissions and sharing settings of the parent record.\n\n**Implementation guidance**\n- Ensure the parentId is a valid 15- or 18-character Salesforce record ID, with preference for 18-character IDs to avoid case sensitivity issues.\n- Verify the existence and active status of the parent record before assigning the parentId.\n- Confirm that the parent object type supports attachments to prevent errors during attachment operations.\n- Handle user permissions and sharing settings to ensure appropriate access rights to the parent record and its attachments.\n- Use Salesforce APIs or metadata services to validate the parentId format, object type, and existence.\n- Implement robust error handling for invalid, non-existent, or unauthorized parentId values during attachment creation or updates.\n- Consider the impact of parent record lifecycle events (such as deletion or archiving) on associated attachments.\n\n**Examples**\n- \"0011a000003XYZ123\" (Account record ID)\n- \"0031a000004ABC456\" (Contact record ID)\n- \"0061a000005DEF789\" (Opportunity record ID)\n\n**Important notes**\n- The parentId is critical for maintaining data integrity and ensuring"},"contentType":{"type":"string","description":"The MIME type of the attachment content, specifying the exact format and nature of the file (e.g., \"image/png\", \"application/pdf\", \"text/plain\"). This information is critical for correctly interpreting, processing, and displaying the attachment across different systems and platforms. It enables clients and servers to determine how to handle the file, whether to render it inline, prompt for download, or apply specific processing rules. Accurate contentType values facilitate validation, security scanning, and compatibility checks, ensuring that the attachment is managed appropriately throughout its lifecycle. 
Properly defined contentType values improve interoperability, user experience, and system security by enabling precise content handling and reducing the risk of errors or vulnerabilities.\n\n**Field behavior**\n- Defines the media type of the attachment to guide processing, rendering, and handling decisions.\n- Used by clients, servers, and intermediaries to determine appropriate actions such as inline display, download prompts, or specialized processing.\n- Supports validation of file format during upload, storage, and download operations to ensure data integrity.\n- Influences security measures including filtering, scanning for malware, and enforcing content policies.\n- Affects user interface elements like preview generation, icon selection, and content categorization.\n- May be used in logging, auditing, and analytics to track content types handled by the system.\n\n**Implementation guidance**\n- Must strictly adhere to the standard MIME type format as defined by IANA (e.g., \"type/subtype\").\n- Should accurately reflect the actual content format to prevent misinterpretation or processing errors.\n- Use generic types such as \"application/octet-stream\" only when the specific type cannot be determined.\n- Include optional parameters (e.g., charset, boundary) when relevant to fully describe the content.\n- Ensure the contentType is set or updated whenever the attachment content changes to maintain consistency.\n- Validate the contentType against the file extension and actual content where feasible to enhance reliability.\n- Consider normalizing the contentType to lowercase to maintain consistency across systems.\n- Avoid relying solely on contentType for security decisions; combine with other metadata and content inspection.\n\n**Examples**\n- \"image/jpeg\"\n- \"application/pdf\"\n- \"text/plain; charset=utf-8\"\n- \"application/vnd.ms-excel\"\n- \"audio/mpeg\"\n- \"video/mp4\"\n- \"application/json\"\n- \"multipart/form-data; boundary=something\"\n- \"application/zip\"\n\n**Important notes**\n- An incorrect, missing, or misleading contentType can lead to improper"},"isPrivate":{"type":"boolean","description":"Indicates whether the attachment is designated as private, restricting its visibility exclusively to users who have been explicitly authorized to access it. This setting is critical for maintaining confidentiality and ensuring that sensitive or proprietary information contained within attachments is accessible only to appropriate personnel. When marked as private, the attachment’s visibility is constrained based on the user’s permissions and the sharing settings of the parent record, thereby reinforcing data security within the Salesforce environment. 
This field directly controls the scope of access to the attachment, dynamically influencing who can view or interact with it, and must be managed carefully to align with organizational privacy policies and compliance requirements.\n\n**Field behavior**\n- When set to true, the attachment is accessible only to users with explicit authorization, effectively hiding it from unauthorized users.\n- When set to false or omitted, the attachment inherits the visibility of its parent record and is accessible to all users who have access to that record.\n- Directly affects the attachment’s visibility scope and access control within Salesforce.\n- Changes to this field immediately impact user access permissions for the attachment.\n- Does not override broader organizational sharing settings but further restricts access.\n- The privacy setting applies in conjunction with existing record-level and object-level security controls.\n\n**Implementation guidance**\n- Use this field to enforce strict data privacy and protect sensitive attachments in compliance with organizational policies and regulatory requirements.\n- Ensure that user roles, profiles, and sharing rules are properly configured to respect this privacy setting.\n- Validate that the value is a boolean (true or false) to prevent data inconsistencies.\n- Evaluate the implications on existing sharing rules and record-level security before modifying this field.\n- Consider implementing audit logging to track changes to this privacy setting for security and compliance purposes.\n- Test changes in a non-production environment to understand the impact on user access before deploying to production.\n- Communicate changes to relevant stakeholders to avoid confusion or unintended access issues.\n\n**Examples**\n- `true` — The attachment is private and visible only to users explicitly authorized to access it.\n- `false` — The attachment is public within the context of the parent record and visible to all users who have access to that record.\n\n**Important notes**\n- Setting this field to true does not grant access to users who lack permission to view the parent record; it only further restricts access.\n- Modifying this setting can significantly affect user access and should be managed carefully to avoid unintended data exposure or access denial.\n- Some Salesforce editions or configurations may have limitations or specific behaviors regarding attachment privacy"},"description":{"type":"string","description":"A textual description providing additional context, detailed information, and clarifying the content, purpose, or relevance of the attachment within Salesforce. This field enhances user understanding and facilitates easier identification, organization, and management of attachments by serving as descriptive metadata that complements the attachment’s core data without containing the attachment content itself. It supports plain text with special characters, punctuation, and full Unicode, enabling expressive, internationalized, and accessible descriptions. 
Typically displayed in user interfaces alongside attachment details, this description also plays a crucial role in search, filtering, sorting, and reporting operations, improving overall data discoverability and usability.\n\n**Field behavior**\n- Optional and editable field used to describe the attachment’s nature or relevance.\n- Supports plain text input including special characters, punctuation, and Unicode.\n- Displayed prominently in user interfaces where attachment details appear.\n- Utilized in search, filtering, sorting, and reporting functionalities to enhance data retrieval.\n- Does not contain or affect the actual attachment content or file storage.\n\n**Implementation guidance**\n- Encourage concise yet informative descriptions to aid quick identification.\n- Validate and sanitize input to prevent injection attacks and ensure data integrity.\n- Support full Unicode to accommodate international characters and symbols.\n- Enforce a reasonable maximum length (commonly 255 characters) to maintain performance and UI consistency.\n- Avoid inclusion of sensitive or confidential information unless appropriate security controls are applied.\n- Implement trimming or sanitization routines to maintain clean and consistent data.\n\n**Examples**\n- \"Quarterly financial report for Q1 2024.\"\n- \"Signed contract agreement with client XYZ.\"\n- \"Product brochure for the new model launch.\"\n- \"Meeting notes and action items from April 15, 2024.\"\n- \"Technical specification document for project Alpha.\"\n\n**Important notes**\n- This field contains descriptive metadata only, not the attachment content.\n- Sensitive information should be excluded or protected according to security policies.\n- Changes to this field do not impact the attachment file or its storage.\n- Consistent and accurate descriptions improve attachment management and user experience.\n- Plays a key role in enhancing searchability and filtering within Salesforce.\n\n**Dependency chain**\n- Directly associated with the attachment entity in Salesforce.\n- Often used in conjunction with other metadata fields such as attachment name, type, owner, size, and creation date.\n- Supports comprehensive context when combined with related records and metadata.\n\n**Technical details**\n- Data type: string.\n- Maximum length: typically 255 characters, configurable"}},"description":"The attachment property represents a file or document linked to a Salesforce record, encompassing both the binary content of the file and its associated metadata. It serves as a versatile container for a wide variety of file types—including documents, images, PDFs, spreadsheets, and emails—allowing users to enrich Salesforce records such as emails, cases, opportunities, or any standard or custom objects with supplementary information or evidence. This property supports comprehensive lifecycle management, enabling operations such as uploading new files, retrieving existing attachments, updating file details, and deleting attachments as needed. By maintaining a direct association with the parent Salesforce record through identifiers like ParentId, it ensures accurate linkage and seamless integration within the Salesforce ecosystem. 
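\n\nA minimal, hypothetical fragment for orientation (values are placeholders taken from the per-field examples; the enclosing import payload shape is not prescribed here):\n\n```json\n{\n  \"attachment\": {\n    \"name\": \"contract_agreement.pdf\",\n    \"parentId\": \"0011a000003XYZ123\",\n    \"contentType\": \"application/pdf\",\n    \"isPrivate\": true,\n    \"description\": \"Signed contract agreement with client XYZ.\"\n  }\n}\n```\n\n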
The attachment property facilitates enhanced collaboration, documentation, and data completeness by providing a centralized and manageable way to handle files related to Salesforce data.\n\n**Field behavior**\n- Stores the binary content or a reference to a file linked to a specific Salesforce record.\n- Supports multiple file formats including but not limited to documents, images, PDFs, spreadsheets, and email files.\n- Enables attaching additional context, documentation, or evidence to Salesforce objects.\n- Allows full lifecycle management: create, read, update, and delete operations on attachments.\n- Contains metadata such as file name, MIME content type, file size, and the ID of the parent Salesforce record.\n- Reflects changes immediately in the associated Salesforce record’s file attachments.\n- Maintains a direct association with the parent record through the ParentId field to ensure accurate linkage.\n- Supports integration with Salesforce Files (ContentVersion and ContentDocument) for advanced file management features.\n\n**Implementation guidance**\n- Encode file content using Base64 encoding when transmitting via Salesforce APIs to ensure data integrity.\n- Validate file size against Salesforce limits (commonly up to 25 MB per attachment) and confirm supported file types before upload.\n- Use the ParentId field to correctly associate the attachment with the intended Salesforce record.\n- Implement robust permission checks to ensure only authorized users can upload, view, or modify attachments.\n- Consider leveraging Salesforce Files (ContentVersion and ContentDocument objects) for enhanced file management capabilities, versioning, and sharing features, especially for new implementations.\n- Handle error responses gracefully, including those related to file size limits, unsupported formats, or permission issues.\n- Ensure metadata fields such as file name and content type are accurately populated to facilitate proper handling and display.\n- When updating attachments, confirm that the new content or metadata aligns with organizational policies and compliance requirements.\n- Optimize performance"},"contentVersion":{"type":"object","properties":{"contentDocumentId":{"type":"string","description":"contentDocumentId is the unique identifier for the parent Content Document associated with a specific Content Version in Salesforce. It serves as the fundamental reference that links all versions of the same document, enabling comprehensive version control and efficient document management within the Salesforce ecosystem. This ID is immutable once the Content Version is created, ensuring a stable and consistent connection across all iterations of the document. By maintaining this linkage, Salesforce facilitates seamless navigation between individual document versions and their overarching document entity, supporting streamlined retrieval, updates, collaboration, and organizational workflows. 
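\n\nFor illustration only, a hypothetical fragment showing how this identifier appears alongside version metadata (values are placeholders drawn from the examples in this schema):\n\n```json\n{\n  \"contentVersion\": {\n    \"contentDocumentId\": \"0691t00000XXXXXXAAA\",\n    \"title\": \"Product Launch Presentation v3\",\n    \"pathOnClient\": \"Documents/ProductLaunchPresentation_v3.pptx\"\n  }\n}\n```\n\n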
This field is automatically assigned by Salesforce during the creation of a Content Version and is critical for maintaining the integrity and traceability of document versions throughout their lifecycle.\n\n**Field behavior**\n- Represents the unique ID of the parent Content Document to which the Content Version belongs.\n- Acts as the primary link connecting multiple versions of the same document.\n- Immutable after the Content Version record is created, preserving data integrity.\n- Enables navigation from a specific Content Version to its parent Content Document.\n- Automatically assigned and managed by Salesforce during Content Version creation.\n- Essential for maintaining consistent version histories and document relationships.\n\n**Implementation guidance**\n- Use this ID to reference or retrieve the entire document across all its versions.\n- Verify that the ID corresponds to an existing Content Document before usage.\n- Utilize in SOQL queries and API calls to associate Content Versions with their parent document.\n- Avoid modifying this field directly; it is managed internally by Salesforce.\n- Employ this ID to support document lifecycle management, version tracking, and collaboration features.\n- Leverage this field to ensure accurate linkage in integrations and custom development.\n\n**Examples**\n- \"0691t00000XXXXXXAAA\"\n- \"0692a00000YYYYYYBBB\"\n- \"0693b00000ZZZZZZCCC\"\n\n**Important notes**\n- Distinct from the Content Version ID, which identifies a specific version rather than the entire document.\n- Mandatory for any valid Content Version record to ensure proper document association.\n- Cannot be null or empty for Content Version records.\n- Critical for preserving document integrity, version tracking, and enabling collaborative document management.\n- Changes to this ID are not permitted post-creation to avoid breaking version linkages.\n- Plays a key role in Salesforce’s document sharing, access control, and audit capabilities.\n\n**Dependency chain**\n- Requires the existence of a valid Content Document record in Salesforce.\n- Directly related to Content Version records representing different iterations of the same document.\n- Integral"},"title":{"type":"string","description":"The title of the content version serves as the primary name or label assigned to this specific iteration of content within Salesforce. It acts as a key identifier that enables users to quickly recognize, differentiate, and manage various versions of content. The title typically summarizes the subject matter, purpose, or significant details about the content, making it easier to locate and understand at a glance. It should be concise yet sufficiently descriptive to effectively convey the essence and context of the content version. 
Well-crafted titles improve navigation, searchability, and version tracking within content management workflows, enhancing overall user experience and operational efficiency.\n\n**Field behavior**\n- Functions as the main identifier or label for the content version in user interfaces, listings, and reports.\n- Enables users to distinguish between different versions of the same content efficiently.\n- Reflects the content’s subject, purpose, key attributes, or version status.\n- Should be concise but informative to facilitate quick recognition and comprehension.\n- Commonly displayed in search results, filters, version histories, and content libraries.\n- Supports sorting and filtering operations to streamline content management.\n\n**Implementation guidance**\n- Use clear, descriptive titles that accurately represent the content version’s focus or changes.\n- Avoid special characters or formatting that could cause display, parsing, or processing issues.\n- Keep titles within a reasonable length (typically up to 255 characters) to ensure readability across various UI components and devices.\n- Update the title appropriately when creating new versions to reflect significant updates or revisions.\n- Consider including version numbers, dates, or status indicators to enhance clarity and version tracking.\n- Follow organizational naming conventions and standards to maintain consistency.\n\n**Examples**\n- \"Q2 Financial Report 2024\"\n- \"Product Launch Presentation v3\"\n- \"Employee Handbook - Updated March 2024\"\n- \"Marketing Strategy Overview\"\n- \"Website Redesign Proposal - Final Draft\"\n\n**Important notes**\n- Titles do not need to be globally unique but should be meaningful and contextually relevant.\n- Changing the title does not modify the underlying content data or its integrity.\n- Titles are frequently used in search, filtering, sorting, and reporting operations within Salesforce.\n- Consistent naming conventions and formatting improve content management efficiency and user experience.\n- Consider organizational standards or guidelines when defining title formats.\n- Titles should avoid ambiguity to prevent confusion between similar content versions.\n\n**Dependency chain**\n- Part of the ContentVersion object within Salesforce’s content management framework.\n- Often associated with other metadata fields such as description"},"pathOnClient":{"type":"string","description":"The pathOnClient property specifies the original file path of the content as it exists on the client machine before being uploaded to Salesforce. This path serves as a precise reference to the source location of the file on the user's local system, facilitating identification, auditing, and tracking of the file's origin. It captures either the full absolute path or a relative path depending on the client environment and upload context, accurately reflecting the exact location from which the file was sourced. 
This information is primarily used for informational, diagnostic, and user interface purposes within Salesforce and does not influence how the file is stored, accessed, or secured in the system.\n\n**Field behavior**\n- Represents the full absolute or relative file path on the client device where the file was stored prior to upload.\n- Automatically populated during file upload processes when the client environment provides this information.\n- Used mainly for reference, auditing, and user interface display to provide context about the file’s origin.\n- May be empty, null, or omitted if the file was not uploaded from a client device or if the path information is unavailable or restricted.\n- Does not influence file storage, retrieval, or access permissions within Salesforce.\n- Retains the original formatting and syntax as provided by the client system at upload time.\n\n**Implementation guidance**\n- Capture the file path exactly as provided by the client system at the time of upload, preserving the original format and case sensitivity.\n- Handle differences in file path syntax across operating systems (e.g., backslashes for Windows, forward slashes for Unix/Linux/macOS).\n- Treat this property strictly as metadata for informational purposes; avoid using it for security, access control, or file retrieval logic.\n- Sanitize or mask sensitive directory or user information when displaying or logging this path to protect user privacy and comply with data protection regulations.\n- Ensure compliance with relevant privacy laws and organizational policies when storing or exposing client file paths.\n- Consider truncating, normalizing, or encoding paths if necessary to conform to platform length limits, display constraints, or to prevent injection vulnerabilities.\n- Provide clear documentation to users and administrators about the non-security nature of this property to prevent misuse.\n\n**Examples**\n- \"C:\\\\Users\\\\JohnDoe\\\\Documents\\\\Report.pdf\"\n- \"/home/janedoe/projects/presentation.pptx\"\n- \"Documents/Invoices/Invoice123.pdf\"\n- \"Desktop/Project Files/DesignMockup.png\"\n- \"../relative/path/to/file.txt\"\n\n**Important notes**\n- This property does not"},"tagCsv":{"type":"string","description":"A comma-separated string representing the tags associated with the content version in Salesforce. This field enables the assignment of multiple descriptive tags within a single string, facilitating efficient categorization, organization, and enhanced searchability of the content. Tags serve as metadata labels that help users filter, locate, and manage content versions effectively across the platform by grouping related items and supporting advanced search and filtering capabilities. 
Proper use of this field improves content discoverability, streamlines content management workflows, and supports dynamic content classification.\n\n**Field behavior**\n- Accepts multiple tags separated by commas, typically without spaces, though trimming whitespace is recommended.\n- Tags are used to categorize, organize, and improve the discoverability of content versions.\n- Tags can be added, updated, or removed by modifying the entire string value.\n- Duplicate tags within the string should be avoided to maintain clarity and effectiveness.\n- Tags are generally treated as case-insensitive for search purposes but stored exactly as entered.\n- Changes to this field immediately affect how content versions are indexed and retrieved in search operations.\n\n**Implementation guidance**\n- Ensure individual tags do not contain commas to prevent parsing errors.\n- Validate the tagCsv string for invalid characters, excessive length, and proper formatting.\n- When updating tags, replace the entire string to accurately reflect the current tag set.\n- Trim leading and trailing whitespace from each tag before processing or storage.\n- Utilize this field to enhance search filters, categorization, and content management workflows.\n- Implement application-level checks to prevent duplicate or meaningless tags.\n- Consider standardizing tag formats (e.g., lowercase) to maintain consistency across records.\n\n**Examples**\n- \"marketing,Q2,presentation\"\n- \"urgent,clientA,confidential\"\n- \"releaseNotes,v1.2,approved\"\n- \"finance,yearEnd,reviewed\"\n\n**Important notes**\n- The maximum length of the tagCsv string is subject to Salesforce field size limitations.\n- Tags should be meaningful, consistent, and standardized to maximize their utility.\n- Modifications to tags can impact content visibility and access if used in filtering or permission logic.\n- This field does not enforce uniqueness of tags; application-level logic should handle duplicate prevention.\n- Tags are stored as-is; any normalization or case conversion must be handled externally.\n- Improper formatting or invalid characters may cause errors or unexpected behavior in search and filtering.\n\n**Dependency chain**\n- Closely related to contentVersion metadata and search/filtering functionality.\n- May integrate with Salesforce"},"contentLocation":{"type":"string","description":"contentLocation specifies the precise storage location of the content file associated with the Salesforce ContentVersion record. It identifies where the actual file data resides—whether within Salesforce's internal storage infrastructure, an external content management system, or a linked external repository—and governs how the content is accessed, retrieved, and managed throughout its lifecycle. This field plays a crucial role in content delivery, access permissions, and integration workflows by clearly indicating the source location of the file data. 
Properly setting and interpreting contentLocation ensures that content is handled correctly according to its storage context, enabling seamless access when stored internally or facilitating robust integration and synchronization with third-party external systems.\n\n**Field behavior**\n- Determines the origin and storage context of the content file linked to the ContentVersion record.\n- Influences the mechanisms and protocols used for accessing, retrieving, and managing the content.\n- Typically assigned automatically by Salesforce based on the upload method or integration configuration.\n- Common values include 'S' for Salesforce internal storage, 'E' for external storage systems, and 'L' for linked external locations.\n- Often read-only to preserve data integrity and prevent unauthorized or unintended modifications.\n- Alterations to this field can impact content visibility, sharing settings, and delivery workflows.\n\n**Implementation guidance**\n- Utilize this field to programmatically distinguish between internally stored content and externally managed files.\n- When integrating with external repositories, ensure contentLocation accurately reflects the external storage to enable correct content handling and retrieval.\n- Avoid manual updates unless supported by Salesforce documentation and accompanied by a comprehensive understanding of the consequences.\n- Validate the contentLocation value prior to processing or delivering content to guarantee proper routing and access control.\n- Confirm that external storage systems are correctly configured, authenticated, and authorized to maintain uninterrupted access.\n- Monitor this field closely during content migration or integration activities to prevent data inconsistencies or access disruptions.\n\n**Examples**\n- 'S' — Content physically stored within Salesforce’s internal storage infrastructure.\n- 'E' — Content residing in an external system integrated with Salesforce, such as a third-party content repository.\n- 'L' — Content located in a linked external location, representing specialized or less common storage configurations.\n\n**Important notes**\n- Manual modifications to contentLocation can lead to broken links, access failures, or data inconsistencies.\n- Salesforce generally manages this field automatically to uphold consistency and reliability.\n- The contentLocation value directly influences content delivery methods and access control policies.\n- External content locations require proper configuration, authentication, and permissions to function correctly"}},"description":"The contentVersion property serves as the unique identifier for a specific iteration of a content item within the Salesforce platform's content management system. It enables precise tracking, retrieval, and management of individual versions or revisions of documents, files, or digital assets. Each contentVersion corresponds to a distinct, immutable state of the content at a given point in time, facilitating robust version control, content integrity, and auditability. When a content item is created or modified, Salesforce automatically generates a new contentVersion to capture those changes, allowing users and systems to access, compare, or revert to previous versions as necessary. 
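\n\nA rough, illustrative fragment (values are placeholders drawn from the per-field examples; the enclosing import payload structure is not prescribed here):\n\n```json\n{\n  \"contentVersion\": {\n    \"title\": \"Employee Handbook - Updated March 2024\",\n    \"pathOnClient\": \"Documents/EmployeeHandbook.pdf\",\n    \"contentLocation\": \"S\",\n    \"tagCsv\": \"hr,handbook,approved\"\n  }\n}\n```\n\n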
This property is fundamental for workflows involving content updates, approvals, compliance tracking, and historical content analysis.\n\n**Field behavior**\n- Uniquely identifies a single, immutable version of a content item within Salesforce.\n- Differentiates between multiple revisions or updates of the same content asset.\n- Automatically created or incremented by Salesforce upon content creation or modification.\n- Used in API operations to retrieve, reference, update metadata, or manage specific content versions.\n- Remains constant once assigned; any content changes result in a new contentVersion record.\n- Supports audit trails by preserving historical versions of content.\n\n**Implementation guidance**\n- Ensure synchronization of contentVersion values with Salesforce to maintain consistency across integrated systems.\n- Utilize this property for version-specific operations such as downloading content, updating metadata, or deleting particular versions.\n- Validate contentVersion identifiers against Salesforce API standards to avoid errors or mismatches.\n- Implement comprehensive error handling for scenarios where contentVersion records are missing, deprecated, or inaccessible due to permission restrictions.\n- Consider access control differences at the version level, as visibility and permissions may vary between versions.\n- Leverage contentVersion in workflows requiring version comparison, rollback, or approval processes.\n\n**Examples**\n- \"0681t000000XyzA\" (initial version of a document)\n- \"0681t000000XyzB\" (a subsequent, updated version reflecting content changes)\n- \"0681t000000XyzC\" (a further revised version incorporating additional edits)\n\n**Important notes**\n- contentVersion is distinct from contentDocumentId; contentVersion identifies a specific version, whereas contentDocumentId refers to the overall document entity.\n- Proper versioning is critical for maintaining content integrity, supporting audit trails, and ensuring regulatory compliance.\n- Access permissions and visibility can differ between content versions, potentially affecting user access and operations.\n- The contentVersion ID typically begins with the prefix"}}},"File":{"type":"object","description":"**CRITICAL: This object is REQUIRED for all file-based import adaptor types.**\n\n**When to include this object**\n\n✅ **MUST SET** when `adaptorType` is one of:\n- `S3Import`\n- `FTPImport`\n- `AS2Import`\n\n❌ **DO NOT SET** for non-file-based imports like:\n- `SalesforceImport`\n- `NetSuiteImport`\n- `HTTPImport`\n- `MongodbImport`\n- `RDBMSImport`\n\n**Minimum required fields**\n\nFor most file imports, you need at minimum:\n- `fileName`: The output file name (supports Handlebars like `{{timestamp}}`)\n- `type`: The file format (json, csv, xml, xlsx)\n- `skipAggregation`: Usually `false` for standard imports\n\n**Example (S3 Import)**\n\n```json\n{\n  \"file\": {\n    \"fileName\": \"customers-{{timestamp}}.json\",\n    \"skipAggregation\": false,\n    \"type\": \"json\"\n  },\n  \"s3\": {\n    \"region\": \"us-east-1\",\n    \"bucket\": \"my-bucket\",\n    \"fileKey\": \"customers-{{timestamp}}.json\"\n  },\n  \"adaptorType\": \"S3Import\"\n}\n```","properties":{"fileName":{"type":"string","description":"**REQUIRED for file-based imports.**\n\nThe name of the file to be created/written. 
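\n\nFor instance, a minimal, hypothetical FTP pairing in which the same name is mirrored in the `ftp` block (placeholder values; `{{timestamp}}` is a Handlebars placeholder, described below):\n\n```json\n{\n  \"file\": {\n    \"fileName\": \"orders-{{timestamp}}.csv\",\n    \"type\": \"csv\",\n    \"skipAggregation\": false\n  },\n  \"ftp\": {\n    \"fileName\": \"orders-{{timestamp}}.csv\"\n  },\n  \"adaptorType\": \"FTPImport\"\n}\n```\n\n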
Supports Handlebars expressions for dynamic naming.\n\n**Default behavior**\n- When the user does not specify a file naming convention, **always default to timestamped filenames** using `{{timestamp}}` (e.g., `\"items-{{timestamp}}.csv\"`). This ensures each run produces a unique file and avoids overwriting previous exports.\n- Only use a fixed filename (without timestamp) if the user explicitly requests overwriting or a fixed name.\n\n**Common patterns**\n- `\"data-{{timestamp}}.json\"` - Timestamped JSON file (DEFAULT — use this pattern when not specified)\n- `\"export-{{date}}.csv\"` - Date-stamped CSV file\n- `\"{{recordType}}-backup.xml\"` - Dynamic record type naming\n\n**Examples**\n- `\"customers-{{timestamp}}.json\"`\n- `\"orders-export.csv\"`\n- `\"inventory-{{date}}.xlsx\"`\n\n**Important**\n- For S3 imports, this should typically match the `s3.fileKey` value\n- For FTP imports, this should typically match the `ftp.fileName` value"},"skipAggregation":{"type":"boolean","description":"Controls whether records are aggregated into a single file or processed individually.\n\n**Values**\n- `false` (DEFAULT): Records are aggregated into a single output file\n- `true`: Each record is written as a separate file\n\n**When to use**\n- **`false`**: Standard batch imports where all records go into one file\n- **`true`**: When each record needs its own file (e.g., individual documents)\n\n**Default:** `false`","default":false},"type":{"type":"string","enum":["json","csv","xml","xlsx","filedefinition"],"description":"**REQUIRED for file-based imports.**\n\nThe format of the output file.\n\n**Options**\n- `\"json\"`: JSON format (most common for API data)\n- `\"csv\"`: Comma-separated values (tabular data)\n- `\"xml\"`: XML format\n- `\"xlsx\"`: Excel spreadsheet format\n- `\"filedefinition\"`: Custom file definition format\n\n**Selection guidance**\n- Use `\"json\"` for structured/nested data, API payloads\n- Use `\"csv\"` for flat tabular data, spreadsheet-like records\n- Use `\"xml\"` for XML-based integrations, SOAP services\n- Use `\"xlsx\"` for Excel-compatible outputs"},"encoding":{"type":"string","enum":["utf8","win1252","utf-16le","gb18030","macroman","iso88591","shiftjis"],"description":"Character encoding for the output file.\n\n**Default:** `\"utf8\"`\n\n**When to change**\n- `\"win1252\"`: Legacy Windows systems\n- `\"utf-16le\"`: Unicode with BOM requirements\n- `\"gb18030\"`: Chinese character sets\n- `\"shiftjis\"`: Japanese character sets","default":"utf8"},"delete":{"type":"boolean","description":"Whether to delete the source file after successful import.\n\n**Values**\n- `true`: Delete source file after processing\n- `false`: Keep source file\n\n**Default:** `false`","default":false},"compressionFormat":{"type":"string","enum":["gzip","zip"],"description":"Compression format for the output file.\n\n**Options**\n- `\"gzip\"`: GZIP compression (.gz)\n- `\"zip\"`: ZIP compression (.zip)\n\n**When to use**\n- Large files that benefit from compression\n- When target system expects compressed files"},"backupPath":{"type":"string","description":"Path where backup copies of files should be stored.\n\n**Examples**\n- `\"backup/\"` - Relative backup folder\n- `\"/archive/2024/\"` - Absolute backup path"},"purgeInternalBackup":{"type":"boolean","description":"Whether to purge internal backup copies after successful processing.\n\n**Default:** `false`","default":false},"batchSize":{"type":"integer","description":"Number of records to include per batch/file when processing large 
datasets.\n\n**When to use**\n- Large imports that need to be split into multiple files\n- When target system has file size limitations\n\n**Note**\n- Only applicable when `skipAggregation` is `true`"},"encrypt":{"type":"boolean","description":"Whether to encrypt the output file.\n\n**Values**\n- `true`: Encrypt the file (requires PGP configuration)\n- `false`: No encryption\n\n**Default:** `false`","default":false},"csv":{"type":"object","description":"CSV-specific configuration. Only used when `type` is `\"csv\"`.","properties":{"rowDelimiter":{"type":"string","description":"Character(s) used to separate rows. Default is newline.","default":"\n"},"columnDelimiter":{"type":"string","description":"Character(s) used to separate columns. Default is comma.","default":","},"includeHeader":{"type":"boolean","description":"Whether to include header row with column names.","default":true},"wrapWithQuotes":{"type":"boolean","description":"Whether to wrap field values in quotes.","default":false},"replaceTabWithSpace":{"type":"boolean","description":"Replace tab characters with spaces.","default":false},"replaceNewlineWithSpace":{"type":"boolean","description":"Replace newline characters with spaces within fields.","default":false},"truncateLastRowDelimiter":{"type":"boolean","description":"Remove trailing row delimiter from the file.","default":false}}},"json":{"type":"object","description":"JSON-specific configuration. Only used when `type` is `\"json\"`.","properties":{"resourcePath":{"type":"string","description":"JSONPath expression to locate records within the JSON structure."}}},"xml":{"type":"object","description":"XML-specific configuration. Only used when `type` is `\"xml\"`.","properties":{"resourcePath":{"type":"string","description":"XPath expression to locate records within the XML structure."}}},"fileDefinition":{"type":"object","description":"Configuration settings for parsing files using a predefined file definition. This object enables processing of complex, non-standard, or proprietary file formats that require specialized parsing logic beyond what the standard parsers (CSV, JSON, XML, etc.) can handle.\n\n**When to use**\n\nConfigure this object when the `type` field is set to \"filedefinition\". This approach is required for properly handling:\n- Legacy or proprietary file formats with complex structures\n- Fixed-width text files where field positions are defined by character positions\n- Electronic Data Interchange (EDI) documents (X12, EDIFACT, etc.)\n- Multi-record type files where different lines have different formats\n- Files requiring complex preprocessing or custom parsing logic\n\n**File definition characteristics**\n\n- **Custom Parsing Rules**: Applies predefined parsing logic to complex file formats\n- **Reusable Configurations**: References externally defined parsing rules that can be reused\n- **Complex Format Support**: Handles formats that standard parsers cannot process\n- **Specialized Processing**: Often used for industry-specific or legacy formats\n\n**Implementation strategy for AI agents**\n\n1. **Format Analysis**:\n    - Determine if the file format is standard (CSV, JSON, XML) or requires custom parsing\n    - Check if the format follows industry standards like EDI, SWIFT, or fixed-width\n    - Assess if there are multiple record types within the same file\n    - Identify if specialized logic is needed to interpret the file structure\n\n2. 
**File Definition Selection**:\n    - Verify that a suitable file definition has already been created in the system\n    - Check if existing file definitions match the format requirements\n    - Confirm the file definition ID from system administrators if needed\n    - Ensure the file definition is compatible with the import's needs\n","properties":{"_fileDefinitionId":{"type":"string","format":"objectId","description":"Reference to the file definition resource."}}}}},"FileSystem":{"type":"object","description":"Configuration for FileSystem imports","properties":{"directoryPath":{"type":"string","description":"Directory path to read files from (required)"}},"required":["directoryPath"]},"AiAgent":{"type":"object","description":"AI Agent configuration used by both AiAgentImport and GuardrailImport (ai_agent type).\n\nConfigures which AI provider and model to use, along with instructions, parameter\ntuning, output format, and available tools. Two providers are supported:\n\n- **openai**: OpenAI models (GPT-4, GPT-4o, GPT-4o-mini, GPT-5, etc.)\n- **gemini**: Google Gemini models via LiteLLM proxy\n\nA `_connectionId` is optional (BYOK). If not provided on the parent import,\nplatform-managed credentials are used.\n","properties":{"provider":{"type":"string","enum":["openai","gemini"],"default":"openai","description":"AI provider to use.\n\n- **openai**: Uses OpenAI Responses API. Configure via the `openai` object.\n- **gemini**: Uses Google Gemini via LiteLLM. Configure via `litellm` with\n  Gemini-specific overrides in `litellm._overrides.gemini`.\n"},"openai":{"type":"object","description":"OpenAI-specific configuration. Used when `provider` is \"openai\".\n","properties":{"instructions":{"type":"string","maxLength":51200,"description":"System prompt that defines the AI agent's behavior, goals, and constraints.\nMaximum 50 KB.\n"},"model":{"type":"string","description":"OpenAI model identifier.\n"},"reasoning":{"type":"object","description":"Controls depth of reasoning for complex tasks.\n","properties":{"effort":{"type":"string","enum":["minimal","low","medium","high"],"description":"How much reasoning effort the model should invest"},"summary":{"type":"string","enum":["concise","auto","detailed"],"description":"Level of detail in reasoning summaries"}}},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"Sampling temperature. Higher values (e.g. 1.5) produce more creative output,\nlower values (e.g. 0.2) produce more focused and deterministic output.\n"},"topP":{"type":"number","minimum":0.1,"maximum":1,"description":"Nucleus sampling parameter"},"maxOutputTokens":{"type":"number","minimum":100,"maximum":128000,"default":1000,"description":"Maximum number of tokens in the model's response"},"serviceTier":{"type":"string","enum":["auto","default","priority"],"default":"default","description":"OpenAI service tier. 
\"priority\" provides higher rate limits and\nlower latency at increased cost.\n"},"output":{"type":"object","description":"Output format configuration","properties":{"format":{"type":"object","description":"Controls the structure of the model's output.\n","properties":{"type":{"type":"string","enum":["text","json_schema","blob"],"default":"text","description":"Output format type.\n\n- **text**: Free-form text response\n- **json_schema**: Structured JSON conforming to a schema\n- **blob**: Binary data output\n"},"name":{"type":"string","description":"Name for the output format (used with json_schema)"},"strict":{"type":"boolean","default":false,"description":"Whether to enforce strict schema validation on output"},"jsonSchema":{"type":"object","description":"JSON Schema for structured output. Required when `format.type` is \"json_schema\".\n","properties":{"type":{"type":"string","enum":["object","array","string","number","integer","boolean"]},"properties":{"type":"object","additionalProperties":true},"required":{"type":"array","items":{"type":"string"}},"additionalproperties":{"type":"boolean"}}}}},"verbose":{"type":"string","enum":["low","medium","high"],"default":"medium","description":"Level of detail in the model's response"}}},"tools":{"type":"array","description":"Tools available to the AI agent during processing.\n","items":{"type":"object","properties":{"type":{"type":"string","enum":["web_search","mcp","image_generation","tool"],"description":"Type of tool.\n\n- **web_search**: Search the web for information\n- **mcp**: Connect to an MCP server for additional tools\n- **image_generation**: Generate images\n- **tool**: Reference a Celigo Tool resource\n"},"webSearch":{"type":"object","description":"Web search configuration (empty object to enable)"},"imageGeneration":{"type":"object","description":"Image generation configuration","properties":{"background":{"type":"string","enum":["transparent","opaque"]},"quality":{"type":"string","enum":["low","medium","high"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024"]},"outputFormat":{"type":"string","enum":["png","webp","jpeg"]}}},"mcp":{"type":"object","description":"MCP server tool configuration","properties":{"_mcpConnectionId":{"type":"string","format":"objectId","description":"Connection to the MCP server"},"allowedTools":{"type":"array","description":"Specific tools to allow from the MCP server (all if omitted)","items":{"type":"string"}}}},"tool":{"type":"object","description":"Reference to a Celigo Tool resource.\n","properties":{"_toolId":{"type":"string","format":"objectId","description":"Reference to the Tool resource"},"overrides":{"type":"object","description":"Per-agent overrides for the tool's internal resources"}}}}}}}},"litellm":{"type":"object","description":"LiteLLM proxy configuration. 
Used when `provider` is \"gemini\".\n\nLiteLLM provides a unified interface to multiple AI providers.\nGemini-specific settings are in `_overrides.gemini`.\n","properties":{"model":{"type":"string","description":"LiteLLM model identifier"},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"Sampling temperature"},"maxCompletionTokens":{"type":"number","minimum":100,"maximum":128000,"default":1000,"description":"Maximum number of tokens in the response"},"topP":{"type":"number","minimum":0.1,"maximum":1,"description":"Nucleus sampling parameter"},"seed":{"type":"number","description":"Random seed for reproducible outputs"},"responseFormat":{"type":"object","description":"Output format configuration","properties":{"type":{"type":"string","enum":["text","json_schema","blob"],"default":"text"},"name":{"type":"string"},"strict":{"type":"boolean","default":false},"jsonSchema":{"type":"object","properties":{"type":{"type":"string","enum":["object","array","string","number","integer","boolean"]},"properties":{"type":"object","additionalProperties":true},"required":{"type":"array","items":{"type":"string"}},"additionalProperties":{"type":"boolean"}}}}},"_overrides":{"type":"object","description":"Provider-specific overrides","properties":{"gemini":{"type":"object","description":"Gemini-specific configuration overrides.\n","properties":{"systemInstruction":{"type":"string","maxLength":51200,"description":"System instruction for Gemini models. Equivalent to OpenAI's `instructions`.\nMaximum 50 KB.\n"},"tools":{"type":"array","description":"Gemini-specific tools","items":{"type":"object","properties":{"type":{"type":"string","enum":["googleSearch","urlContext","fileSearch","mcp","tool"],"description":"Type of Gemini tool.\n\n- **googleSearch**: Google Search grounding\n- **urlContext**: URL content retrieval\n- **fileSearch**: Search uploaded files\n- **mcp**: Connect to an MCP server\n- **tool**: Reference a Celigo Tool resource\n"},"googleSearch":{"type":"object","description":"Google Search configuration (empty object to enable)"},"urlContext":{"type":"object","description":"URL context configuration (empty object to enable)"},"fileSearch":{"type":"object","properties":{"fileSearchStoreNames":{"type":"array","items":{"type":"string"}}}},"mcp":{"type":"object","properties":{"_mcpConnectionId":{"type":"string","format":"objectId"},"allowedTools":{"type":"array","items":{"type":"string"}}}},"tool":{"type":"object","properties":{"_toolId":{"type":"string","format":"objectId"},"overrides":{"type":"object"}}}}}},"responseModalities":{"type":"array","description":"Response output modalities","items":{"type":"string","enum":["text","image"]},"default":["text"]},"topK":{"type":"number","description":"Top-K sampling parameter for Gemini"},"thinkingConfig":{"type":"object","description":"Controls Gemini's extended thinking capabilities","properties":{"includeThoughts":{"type":"boolean","description":"Whether to include thinking steps in the response"},"thinkingBudget":{"type":"number","minimum":100,"maximum":4000,"description":"Maximum tokens allocated for thinking"},"thinkingLevel":{"type":"string","enum":["minimal","low","medium","high"]}}},"imageConfig":{"type":"object","description":"Gemini image generation configuration","properties":{"aspectRatio":{"type":"string","enum":["1:1","2:3","3:2","3:4","4:3","4:5","5:4","9:16","16:9","21:9"]},"imageSize":{"type":"string","enum":["1K","2K","4k"]}}},"mediaResolution":{"type":"string","enum":["low","medium","high"],"description":"Resolution for 
media inputs (images, video)"}}}}}}}}},"Guardrail":{"type":"object","description":"Configuration for GuardrailImport adaptor type.\n\nGuardrails are safety and compliance checks that can be applied to data\nflowing through integrations. Three types of guardrails are supported:\n\n- **ai_agent**: Uses an AI model to evaluate data against custom instructions\n- **pii**: Detects and optionally masks personally identifiable information\n- **moderation**: Checks content against moderation categories (e.g. `hate`, `violence`, `harassment`, `sexual`, `self_harm`, `illicit`). Use category names exactly as listed in `moderation.categories` — e.g. `hate` (not \"hate speech\"), `violence` (not \"violent content\").\n\nGuardrail imports do not require a `_connectionId` (unless using BYOK for `ai_agent` type).\n","properties":{"type":{"type":"string","enum":["ai_agent","pii","moderation"],"description":"The type of guardrail to apply.\n\n- **ai_agent**: Evaluate data using an AI model with custom instructions.\n  Requires the `aiAgent` sub-configuration.\n- **pii**: Detect personally identifiable information in data.\n  Requires the `pii` sub-configuration with at least one entity type.\n- **moderation**: Check content for harmful categories.\n  Requires the `moderation` sub-configuration with at least one category.\n"},"confidenceThreshold":{"type":"number","minimum":0,"maximum":1,"default":0.7,"description":"Confidence threshold for guardrail detection (0 to 1).\n\nOnly detections with confidence at or above this threshold will be flagged.\nLower values catch more potential issues but may increase false positives.\n"},"aiAgent":{"$ref":"#/components/schemas/AiAgent"},"pii":{"type":"object","description":"Configuration for PII (Personally Identifiable Information) detection.\n\nRequired when `guardrail.type` is \"pii\". At least one entity type must be specified.\n","properties":{"entities":{"type":"array","description":"PII entity types to detect in the data.\n\nAt least one entity must be specified when using PII guardrails.\n","items":{"type":"string","enum":["credit_card_number","card_security_code_cvv_cvc","cryptocurrency_wallet_address","date_and_time","email_address","iban_code","bic_swift_bank_identifier_code","ip_address","location","medical_license_number","national_registration_number","persons_name","phone_number","url","us_bank_account_number","us_drivers_license","us_itin","us_passport_number","us_social_security_number","uk_nhs_number","uk_national_insurance_number","spanish_nif","spanish_nie","italian_fiscal_code","italian_drivers_license","italian_vat_code","italian_passport","italian_identity_card","polish_pesel","finnish_personal_identity_code","singapore_nric_fin","singapore_uen","australian_abn","australian_acn","australian_tfn","australian_medicare","indian_pan","indian_aadhaar","indian_vehicle_registration","indian_voter_id","indian_passport","korean_resident_registration_number"]}},"mask":{"type":"boolean","default":false,"description":"Whether to mask detected PII values in the output.\n\nWhen true, detected PII is replaced with masked values.\nWhen false, PII is only flagged without modification.\n"}}},"moderation":{"type":"object","description":"Configuration for content moderation.\n\nRequired when `guardrail.type` is \"moderation\". 
At least one category must be specified.\n","properties":{"categories":{"type":"array","description":"Content moderation categories to check for.\n\nAt least one category must be specified when using moderation guardrails.\n","items":{"type":"string","enum":["sexual","sexual_minors","hate","hate_threatening","harassment","harassment_threatening","self_harm","self_harm_intent","self_harm_instructions","violence","violence_graphic","illicit","illicit_violent"]}}}}},"required":["type"]},"OneToMany":{"type":"boolean","description":"Controls whether the resource treats child records within parent records as the primary data units.\n\n**Important: this is not for specifying where records are in an api response**\n\nIf you need to tell an export where to find the array of records in the HTTP response\nbody (e.g. \"the records are at data.items\"), use `http.response.resourcePath` instead.\n`oneToMany` serves a completely different purpose — it operates on records that have\nalready been extracted from the response.\n\n**What oneToMany actually does**\n\nWhen set to true, this field fundamentally changes how record data is processed:\n- The system will \"unwrap\" nested child records from their parent containers\n- Each child record becomes a separate output record for downstream processing\n- The pathToMany field must be set to indicate where these child records are located\n- Parent record fields can still be accessed via a special \"parent\" context\n\nThis is typically used on **lookup exports** (isLookup: true) or **imports** where\nthe incoming records contain nested arrays that need to be fanned out.\n\nCommon scenarios for enabling this option:\n- Processing order line items individually from an order export\n- Handling invoice line items from an invoice export\n- Processing individual transaction lines from journal entries\n- Extracting address records from customer exports\n\nThis setting applies for the duration of the current flow step only and does not affect\nhow data is stored or structured in other flow steps.\n\nIf false (default), the resource processes each top-level record as a single unit.\n","default":false},"PathToMany":{"type":"string","description":"Specifies the JSON path to child records when oneToMany mode is enabled.\n\nThis field is only used when oneToMany is set to true. It defines the exact location\nof child records within the parent record structure using dot notation:\n\n- Simple path: \"items\" for a direct child array field\n- Nested path: \"lines.lineItems\" for a more deeply nested array\n- Multi-level: \"details.items.subitems\" for deeply nested structures\n\nThe system uses this path to:\n- Locate the array of child records within each parent record\n- Extract each array element as a separate record for processing\n- Make both the child record data and parent context available to downstream steps\n\nImportant considerations:\n- The path must point to an array field\n- For row-based data (i.e. where Celigo models this via an array or arrays of objects), this field is not required\n- If the path is invalid or doesn't exist, the resource will report success but process zero records\n- Maximum path depth: 10 levels\n\nThis field must contain a valid JSON path expression using dot notation.\n"},"Filter":{"type":"object","description":"Configuration for selectively processing records based on specified criteria. 
This object enables\nprecise control over which items are included or excluded from processing operations.\n\n**Filter behavior**\n\nWhen configured, the filter is applied before processing begins:\n- Items that match the filter criteria are processed\n- Items that don't match are completely skipped\n- No partial processing is performed\n\n**Implementation approaches**\n\nThere are two distinct filtering mechanisms available:\n\n**Rule-Based Filtering (`type: \"expression\"`)**\n- **Best For**: Common filtering patterns based on standard attributes\n- **Capabilities**: Filter by names, values, dates, numerical ranges, text patterns\n- **Advantages**: Declarative, no coding required, consistent performance\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear, static criteria for selection\n\n**Script-Based Filtering (`type: \"script\"`)**\n- **Best For**: Complex logic, dynamic criteria, or business rules\n- **Capabilities**: Full programmatic control, access to complete metadata\n- **Advantages**: Maximum flexibility, can implement any filtering logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Simple rules aren't sufficient or logic needs to be dynamic\n","properties":{"type":{"type":"string","description":"Determines which filtering mechanism to use. This choice affects which properties\nmust be configured and how filtering logic is implemented.\n\n**Available types**\n\n**Rule-Based Filtering (`\"expression\"`)**\n- **Required Config**: The `expression` object with rule definitions\n- **Behavior**: Evaluates declarative rules against item attributes\n- **Best For**: Common patterns like name matching, date ranges, value limits\n- **Advantages**: Simpler to configure, no custom code required\n\n**Script-Based Filtering (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to determine which items to process\n- **Best For**: Complex conditions, business logic, dynamic criteria\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard filtering needs (name, size, date), use `\"expression\"`\n2. For complex logic or conditions not covered by expressions, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based filtering. This object enables filtering\nitems based on common attributes without requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define filtering rules that\ncan match against item attributes like name, type, value, date, and other properties.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Rules can be combined with AND/OR logic\n- Each rule can check a specific attribute\n- Multiple conditions can be applied (ranges, pattern matching, exact matches)\n\n**Common filter patterns**\n\n1. **Pattern matching**: Using wildcards like `*` and `?`\n2. **Value range filtering**: Numbers between min and max values\n3. **Date range filtering**: Items created/modified within specific time ranges\n4. 
**Status checking**: Items with specific status values or properties\n\nFor AI agents: Rule-based filtering should be your first choice when the filtering criteria\ncan be expressed in terms of standard attributes. Only use script-based filtering when\nmore complex logic is required.\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"1\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"1\"\nfor current implementations.\n","enum":["1"]},"rules":{"type":"array","description":"Expression array defining filter conditions using prefix notation. The first element is the operator,\nfollowed by its operands which may themselves be nested expression arrays.\n\nThe rule expression follows this pattern:\n- First element: Operator name (string)\n- Remaining elements: Operands for that operator (values or nested expressions)\n\n**Expression structure**\n\nFilter expressions use a prefix notation where operators appear before their operands:\n```\n[operator, operand1, operand2, ...]\n```\n\n**Comparison Operators**\n- `\"equals\"`: Exact match (equals)\n- `\"notequals\"`: Not equal to value (not equals)\n- `\"greaterthan\"`: Value is greater than specified value (is greater than)\n- `\"greaterthanequals\"`: Value is greater than or equal to specified value (is greater than or equals)\n- `\"lessthan\"`: Value is less than specified value (is less than)\n- `\"lessthanequals\"`: Value is less than or equal to specified value (is less than or equals)\n- `\"startswith\"`: String starts with specified prefix (starts with)\n- `\"endswith\"`: String ends with specified suffix (ends with)\n- `\"contains\"`: String contains specified substring (contains)\n- `\"doesnotcontain\"`: String does not contain specified substring (does not contain)\n- `\"isempty\"`: Field is empty or null (is empty)\n- `\"isnotempty\"`: Field contains a value (is not empty)\n- `\"matches\"`: Matches specified pattern (matches)\n\n**Logical Operators**\n- `\"and\"`: All conditions must be true\n- `\"or\"`: At least one condition must be true\n- `\"not\"`: Negates the condition\n\n**Field Access and Type Conversion**\n- `\"extract\"`: Access a field from the item by name\n- `\"settings\"`: Access a custom setting from the flow, flow step, or integration configuration\n- `\"boolean\"`: Convert value to Boolean type\n- `\"epochtime\"`: Convert value to Epoch Time (Unix timestamp)\n- `\"number\"`: Convert value to Number type\n- `\"string\"`: Convert value to String type\n\n**Field Access Details**\n\n**Using `extract` to access record fields:**\n- Retrieves values from the current record being processed\n- Can access nested properties using dot notation (e.g., `\"customer.email\"`)\n- Returns the raw field value which may need type conversion\n\n**Using `settings` to access configuration values:**\n- Retrieves values from the integration's configuration settings\n- Supports different scopes with prefix notation:\n  - `flow.settingName`: Access flow-level settings\n  - `export.settingName`: Access export-level settings\n  - `import.settingName`: Access import-level settings\n  - `integration.settingName`: Access integration-level settings\n- Useful for dynamic filtering based on configuration\n\n**Field Transformations**\n- `\"lowercase\"`: Convert string to lowercase\n- `\"uppercase\"`: Convert string to uppercase\n- `\"ceiling\"`: Round number up to the nearest integer\n- `\"floor\"`: Round number down to 
the nearest integer\n- `\"abs\"`: Get absolute value of a number\n\nType conversion operators are often necessary when comparing extracted field values against literals or when the field type doesn't match the comparison operator's expected type. For example:\n\n```json\n[\n  \"equals\",\n  [\n    \"number\",  // Convert to number before comparison\n    [\n      \"extract\",\n      \"quantity\"\n    ]\n  ],\n  100\n]\n```\n\nExample with datetime conversion:\n```json\n[\n  \"greaterthan\",\n  [\n    \"epochtime\",  // Convert to Unix timestamp before comparison\n    [\n      \"extract\",\n      \"createdDate\"\n    ]\n  ],\n  1609459200000  // January 1, 2021 as Unix timestamp in milliseconds\n]\n```\n\nExample with transformations:\n```json\n[\n  \"and\",\n  [\n    \"matches\",\n    [\n      \"lowercase\",  // Convert to lowercase before matching\n      [\n        \"string\",\n        [\n          \"extract\",\n          \"categories\"\n        ]\n      ]\n    ],\n    \"netsuite\"\n  ],\n  [\n    \"notequals\",\n    [\n      \"string\",\n      [\n        \"extract\",\n        \"recurrence.pattern.type\"\n      ]\n    ],\n    \"\"\n  ]\n]\n```\n\nExample comparing a record field with a flow setting:\n```json\n[\n  \"equals\",\n  [\n    \"string\",\n    [\n      \"extract\",\n      \"trantype\"\n    ]\n  ],\n  [\n    \"string\",\n    [\n      \"settings\",\n      \"flow.trantype\"\n    ]\n  ]\n]\n```\n\n**Examples**\n\nExample 1: Status field is not equal to \"cancelled\"\n```json\n[\n  \"notequals\",\n  [\n    \"extract\",\n    \"status\"\n  ],\n  \"cancelled\"\n]\n```\n\nExample 2: Filename starts with \"HC\"\n```json\n[\n  \"startswith\",\n  [\n    \"extract\",\n    \"filename\"\n  ],\n  \"HC\"\n]\n```\n\nExample 3: Amount is greater than 100\n```json\n[\n  \"greaterthan\",\n  [\n    \"number\",\n    [\n      \"extract\",\n      \"amount\"\n    ]\n  ],\n  100\n]\n```\n\nExample 4: Order date is after January 1, 2023\n```json\n[\n  \"greaterthan\",\n  [\n    \"extract\",\n    \"orderDate\"\n  ],\n  \"2023-01-01T00:00:00Z\"\n]\n```\n\nExample 5: Category contains any of [\"Urgent\", \"High Priority\"]\n```json\n[\n  \"anyof\",\n  [\n    \"extract\",\n    \"category\"\n  ],\n  [\"Urgent\", \"High Priority\"]\n]\n```\n","items":{"oneOf":[{"title":"String","type":"string"},{"title":"Number","type":"number"},{"title":"Boolean","type":"boolean"},{"title":"Object","type":"object"},{"title":"Array","type":"array"}]}}}},"script":{"type":"object","description":"Configuration for programmable script-based filtering. This object enables complex, custom\nfiltering logic beyond what expression-based filtering can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `filter.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to determine which items\nshould be processed.\n\n**Implementation approach**\n\nScript-based filtering works by:\n1. Executing the specified function from the referenced script\n2. Passing item data to the function\n3. 
Using the function's return value (true/false) to determine inclusion\n\n**Common use cases**\n\nScript filtering is ideal for:\n- Complex business logic that can't be expressed as simple rules\n- Dynamic filtering criteria that change based on external factors\n- Content-based filtering that requires deep inspection\n- Advanced pattern matching beyond simple wildcards\n- Multi-stage filtering with intermediate logic\n\nFor AI agents: Only use script-based filtering when expression-based filtering is insufficient.\nScript filtering requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to the Script resource that contains the filtering logic. This must be a valid\nObjectId of a Script resource that exists in the system.\n\nThe referenced script must contain the function specified in the `function` field\nand must be written to handle filtering specifically. The script receives\nitem data as its input and must return a boolean value indicating whether\nto process the item (true) or skip it (false).\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n"},"function":{"type":"string","description":"Name of the function within the script to execute for filtering decisions. This function\nmust exist in the script referenced by _scriptId.\n\n**Function requirements**\n\nThe specified function must:\n- Accept item data as its first parameter\n- Return a boolean value (true to process the item, false to skip it)\n- Handle errors gracefully\n- Execute efficiently (as it may run for many items)\n\n**Function signature**\n\n```javascript\nfunction filterItems(itemData) {\n  // itemData contains properties of the item being evaluated\n  // Custom logic here\n  return true; // or false to skip the item\n}\n```\n\nFor AI agents: Ensure the function name exactly matches a function defined in the\nreferenced script, as mismatches will cause the filter to fail.\n"}}}}},"Transform":{"type":"object","description":"Configuration for transforming data during processing operations. This object enables\nreshaping of records.\n\n**Transformation capabilities**\n\nCeligo's transformation engine offers powerful features for data manipulation:\n- Precise field mapping with JSONPath expressions\n- Support for any level of nested arrays\n- Formula-based field value generation\n- Dynamic references to flow and integration settings\n\n**Implementation approaches**\n\nThere are two distinct transformation mechanisms available:\n\n**Rule-Based Transformation (`type: \"expression\"`)**\n- **Best For**: Most transformation scenarios from simple to complex\n- **Capabilities**: Field mapping, formula calculations, lookups, nested data handling\n- **Advantages**: Visual configuration, no coding required, intuitive interface\n- **Configuration**: Define rules in the `expression` object\n- **Use When**: You have clear mapping requirements or need to reshape data structure\n\n**Script-Based Transformation (`type: \"script\"`)**\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Capabilities**: Full programmatic control, custom processing, complex business rules\n- **Advantages**: Maximum flexibility, can implement any transformation logic\n- **Configuration**: Reference a script in the `script` object\n- **Use When**: Visual transformation tools aren't sufficient for your use case\n","properties":{"type":{"type":"string","description":"Determines which transformation mechanism to use. 
This choice affects which properties\nmust be configured and how transformation logic is implemented.\n\n**Available types**\n\n**Rule-Based Transformation (`\"expression\"`)**\n- **Required Config**: The `expression` object with mapping definitions\n- **Behavior**: Applies declarative rules to reshape data\n- **Best For**: Most transformation scenarios from simple to complex\n- **Advantages**: Visual configuration, no coding required\n\n**Script-Based Transformation (`\"script\"`)**\n- **Required Config**: The `script` object with _scriptId and function\n- **Behavior**: Executes custom JavaScript to transform data\n- **Best For**: Extremely complex logic or proprietary algorithms\n- **Advantages**: Maximum flexibility, can implement any logic\n\n**Implementation guidance**\n\n1. For standard data transformations, use `\"expression\"`\n2. For complex logic or specialized processing, use `\"script\"`\n3. When selecting a type, you must configure the corresponding object:\n    - `type: \"expression\"` requires the `expression` object\n    - `type: \"script\"` requires the `script` object\n","enum":["expression","script"]},"expression":{"type":"object","description":"Configuration for declarative rule-based transformations. This object enables reshaping data\nwithout requiring custom code.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"expression\" and should not be\nconfigured otherwise. It provides a standardized way to define transformation rules that\ncan map, modify, and generate data elements.\n\n**Implementation guidance**\n\nThe expression system uses a rule-based approach where:\n- Field mappings define how input data is transformed to target fields\n- Formulas can be used to calculate or generate new values\n- Lookups can enrich data by fetching related information\n- Mode determines how records are processed (create new or modify existing)\n","properties":{"version":{"type":"string","description":"Version identifier for the expression format. Currently only version \"2\" is supported.\n\nThis field ensures future compatibility if the expression format evolves. Always set to \"2\"\nfor current implementations.\n","enum":["2"]},"rulesTwoDotZero":{"type":"object","description":"Configuration for version 2 transformation rules. 
This object contains the core logic\nfor how data is mapped, enriched, and transformed.\n\n**Capabilities**\n\nTransformation 2.0 provides:\n- Precise field mapping with JSONPath expressions\n- Support for deeply nested data structures\n- Formula-based field generation\n- Dynamic lookups for data enrichment\n- Multiple operating modes to fit different scenarios\n","properties":{"mode":{"type":"string","description":"Transformation mode that determines how records are handled during processing.\n\n**Available modes**\n\n**Create Mode (`\"create\"`)**\n- **Behavior**: Builds entirely new output records from inputs\n- **Use When**: Output structure differs significantly from input\n- **Advantage**: Clean slate approach, no field inheritance\n\n**Modify Mode (`\"modify\"`)**\n- **Behavior**: Makes targeted edits to existing records\n- **Use When**: Output structure should remain similar to input\n- **Advantage**: Preserves unmapped fields from the original record\n","enum":["create","modify"]},"mappings":{"$ref":"#/components/schemas/Mappings"},"lookups":{"allOf":[{"description":"Shared lookup tables used across all mappings defined in the transformation rules.\n\n**Purpose**\n\nLookups provide centralized value translation that can be referenced from any mapping\nin your transformation configuration. They enable consistent translation of codes, IDs,\nand values between systems without duplicating translation logic.\n\n**Usage in transformations**\n\nLookups are particularly valuable in transformations for:\n\n- **Data Normalization**: Standardizing values from diverse source systems\n- **Code Translation**: Converting between different coding systems (e.g., status codes)\n- **Field Enrichment**: Adding descriptive values based on ID or code lookups\n- **Cross-Reference Resolution**: Mapping identifiers between integrated systems\n\n**Implementation**\n\nLookups are defined once in this array and referenced by name in mappings:\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"statusMapping\",\n    \"map\": {\n      \"A\": \"Active\",\n      \"I\": \"Inactive\",\n      \"P\": \"Pending\"\n    },\n    \"default\": \"Unknown Status\"\n  }\n]\n```\n\nThen referenced in mappings using the lookupName property:\n\n```json\n{\n  \"generate\": \"status\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.statusCode\",\n  \"lookupName\": \"statusMapping\"\n}\n```\n\nThe system automatically applies the lookup during transformation processing.\n\nFor complete details on lookup properties and behavior, see the Lookups schema.\n"},{"$ref":"#/components/schemas/Lookups"}]}}}}},"script":{"type":"object","description":"Configuration for programmable script-based transformations. This object enables complex, custom\ntransformation logic beyond what expression-based transformations can provide.\n\n**Usage context**\n\nThis object is REQUIRED when `transform.type` is set to \"script\" and should not be configured\notherwise. It provides a way to execute custom JavaScript code to transform data according to\nspecialized business rules or complex algorithms.\n\n**Implementation approach**\n\nScript-based transformation works by:\n1. Executing the specified function from the referenced script\n2. Passing input data to the function\n3. 
Using the function's return value as the transformed output\n\n**Common use cases**\n\nScript transformation is ideal for:\n- Complex business logic that can't be expressed through mappings\n- Algorithmic transformations requiring computation\n- Dynamic transformations based on external factors\n- Legacy system data format compatibility\n- Multi-stage processing with intermediate steps\n\nOnly use script-based transformation when expression-based transformation is insufficient.\nScript transformation requires maintaining custom code, which adds complexity to the integration.\n","properties":{"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the transformation logic.\n\nThe referenced script should contain the function specified in the\n'function' property.\n","format":"objectid"},"function":{"type":"string","description":"Name of the function within the script to execute for transformation. This function\nmust exist in the script referenced by _scriptId.\n"}}}}},"Mappings":{"type":"array","description":"Array of field mapping configurations for transforming data from one format into another.\n\n**Guidance**\n\nThis schema is designed around RECURSION as its core architectural principle. Understanding this recursive\nnature is essential for building effective mappings:\n\n1. The schema is self-referential by design - a mapping can contain nested mappings of the same structure\n2. Complex data structures (nested objects, arrays of objects, arrays of arrays of objects) are ALL\n   handled through this recursive pattern\n3. Each mapping handles one level of the data structure; deeper levels are handled by nested mappings\n\nWhen generating mappings programmatically:\n- For simple fields (string, number, boolean): Create single mapping objects\n- For objects: Create a parent mapping with nested 'mappings' array containing child field mappings\n- For arrays: Use 'buildArrayHelper' with extract paths defining array inputs and\n  recursive 'mappings' to define object structures\n\nThe system will process these nested structures recursively during runtime, ensuring proper construction\nof complex hierarchical data while maintaining excellent performance.\n","items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. json Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}}},"items":{"type":"object","properties":{"generate":{"type":"string","description":"**Purpose**\nDefines the target field name in the output object/record.\n\n**Guidance**\nThis is the PRIMARY FIELD that identifies the output property being created:\n\n- For regular fields: Set to the exact property name (e.g., \"firstName\", \"price\", \"isActive\")\n- For object fields: Set to the object property name, then add child mappings in the 'mappings' array\n- For array fields: Set to the array property name, then configure 'buildArrayHelper'\n- For arrays within arrays: Leave EMPTY for the inner array mappings, as they don't have field names\n\nIMPORTANT: Do NOT use dot notation (e.g., \"customer.firstName\") in this field. 
Instead, create proper\nhierarchical structure with nested mappings:\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"status\": \"Active\",\n  \"mappings\": [\n    {\"generate\": \"firstName\", \"dataType\": \"string\", \"extract\": \"$.name.first\", \"status\": \"Active\"}\n  ]\n}\n```\n\nWhen parsing existing mappings, empty 'generate' fields almost always indicate inner array structures\nwithin a parent array.\n"},"dataType":{"type":"string","description":"**Purpose**\nExplicitly declares the data type of the output field, controlling how data is processed and structured.\n\n**Guidance**\nThis is a REQUIRED field that fundamentally determines mapping behavior:\n\n**Simple Types (direct value mapping)**\n- `string`: Text values, converts other types to string representation\n- `number`: Numeric values, attempts conversion from strings\n- `boolean`: True/false values, converts truthy/falsy values\n\nDates are represented as strings — use `string` for date fields and\ndrive the parsing/formatting through the `extractDateFormat` /\n`generateDateFormat` / `extractDateTimezone` / `generateDateTimezone`\nfields. There is no separate `date` enum value.\n\n**Complex Types (require additional configuration)**\n- `object`: Creates a nested object. REQUIRES child mappings in the 'mappings' array\n\n**Array Types**\n- `stringarray`: Array of strings\n- `numberarray`: Array of numbers\n- `booleanarray`: Array of booleans\n- `objectarray`: Array of objects (most common array type)\n- `arrayarray`: Array of arrays (for matrix/table structures)\n\nArray dataTypes can be populated two ways: pass a source array through\nunchanged via `extract` alone (when the source is already an array of\nthe right shape), or construct/iterate via `buildArrayHelper`.\n\nIMPORTANT: The dataType controls which additional fields are relevant:\n- For date-like string fields: extractDateFormat, generateDateFormat, etc. become relevant\n- For object types: 'mappings' array becomes relevant\n- For array types: `buildArrayHelper` is one option (see above)\n\nWhen analyzing existing mappings or generating new ones, always check dataType first\nto understand what additional fields should be present.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"extract":{"type":"string","description":"**Purpose**\nDefines how to retrieve data from the input record to populate the output field.\n\n**Guidance**\nThis field supports THREE DISTINCT PATTERNS that are easily distinguished:\n\n**1. json Path Syntax**\n- MUST start with '$.' prefix\n- Used for precisely targeting data in structured JSON objects\n- Examples: '$.customer.firstName', '$.items[0].price', '$.addresses[*].street'\n- Wildcards like [*] extract multiple values/objects\n\n```json\n\"extract\": \"$.customer.addresses[*]\"  // Extracts all addresses\n```\n\n**2. Handlebars Template Syntax**\n- Contains '{{' and '}}' pattern\n- Evaluated by the AFE 2.0 handlebars template engine\n- Can include logic, formatting, and computation\n- Access input record fields with {{record.fieldName}} notation\n- Examples: \"{{record.firstName}} {{record.lastName}}\", \"{{#if record.isActive}}Active{{else}}Inactive{{/if}}\"\n\n```json\n\"extract\": \"{{record.price}} {{record.currency}}\"  // Combines two fields\n```\n\n**3. 
Hard-Coded Value (literal string)**\n- Does NOT start with '$.'\n- Does NOT contain handlebars '{{' syntax\n- System treats it as a literal string value\n- VERY COMMON for setting static/constant values\n- Examples: \"Active\", \"USD\", \"Completed\", \"true\"\n\n```json\n\"extract\": \"primary\"  // Sets field value to the literal string \"primary\"\n\"extract\": \"true\"     // Sets field value to the literal string \"true\"\n\"extract\": \"N/A\"      // Sets field value to the literal string \"N/A\"\n```\n\nThis third pattern is the simplest and most efficient way to set hard-coded values in your mappings.\nAI agents should use this pattern whenever a field needs a static value that doesn't come from\nthe input record or require computation.\n\n**Important implementation details**\n\n- JSON path patterns ALWAYS execute from the TOP-LEVEL root of the input record\n- The system maintains this context even in deeply nested mappings\n- For object mappings without child mappings, extract should return a complete object\n- When both extract and mappings are defined for objects, extract is applied first\n\nFor most simple field-to-field mappings, prefer JSON path syntax for its clarity and performance.\nFor hard-coded values, simply use the literal string as the extract value.\n"},"extractDateFormat":{"type":"string","description":"Specifies the format pattern of the input date string to ensure proper parsing.\n\nUsed on string-typed mappings whose `extract` yields a date. Uses\nMoment.js-compatible formatting tokens to describe how the incoming date\nstring is structured.\n"},"extractDateTimezone":{"type":"string","description":"Specifies the timezone of the input date string using Olson/IANA timezone identifiers.\n\nUsed on string-typed mappings whose `extract` yields a date; tells the system\nhow to interpret timestamp values from the input system.\n"},"generateDateFormat":{"type":"string","description":"Specifies the output format pattern when generating a date string or converting\nfrom a Date type to String type.\n\nUses Moment.js-compatible formatting tokens to define the structure of the resulting\ndate string.\n"},"generateDateTimezone":{"type":"string","description":"Specifies the timezone to apply when generating or converting timestamp values\nusing Olson/IANA timezone identifiers.\n\nControls timezone conversion when producing date output.\n"},"default":{"type":"string","description":"Specifies a fallback value to use when extract returns empty/null or when conditional\nlogic fails and no other mapping supplies a value.\n\nThis ensures the output field always has a value, even when input data is missing.\n"},"lookupName":{"type":"string","description":"**Purpose**\nReferences a lookup table for transforming values during the mapping process.\n\n**Usage**\n\nThe lookupName refers to a named lookup defined in the lookups array of the same resource.\n\n```json\n{\n  \"generate\": \"countryName\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.countryCode\",\n  \"lookupName\": \"countryCodeToName\"\n}\n```\n\nDuring processing, the system:\n1. Extracts the value from the input record (e.g., \"US\")\n2. Finds the lookup table with the specified name\n3. Uses the extracted value as a key in the lookup\n4. 
Returns the corresponding value (e.g., \"United States\")\n\n**Benefits**\n\n- **Standardization**: Ensures consistent value translation across mappings\n- **Centralization**: Define translations once and reference them in multiple places\n- **Maintainability**: Update all mappings by changing the lookup definition\n- **Readability**: Makes mappings more descriptive and self-documenting\n\nThe specific lookup capabilities depend on the context where mappings are used.\n"},"description":{"type":"string","description":"Optional free-text annotation that appears in the Mapper sidebar to provide context about\nthe mapping's purpose for collaboration and documentation.\n\nHas no functional impact on the mapping behavior.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the value produced by `extract`, before any\nconversion to `dataType`. Same enum as `dataType`. Set on leaf mappings\nonly — parent mappings (with child `mappings` or `buildArrayHelper`)\nhave no extracted value of their own; the children carry their own\n`sourceDataType`.\n\nFor date fields use `string` (JSON represents dates as strings); the\nparsing/formatting lives in `extractDateFormat` / `generateDateFormat` /\n`extractDateTimezone` / `generateDateTimezone`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"mappings":{"type":"array","description":"**Purpose**\nEnables recursive definition of nested object structures through child mapping objects.\n\n**Guidance**\nThis is the KEY FIELD that implements the recursive pattern at the core of this schema:\n\n**When to Use**\n- REQUIRED when dataType = \"object\" (unless you are copying an entire object from the input record)\n- REQUIRED in buildArrayHelper.mappings when defining complex object array elements\n- NEVER used with simple types (string, number, boolean, date)\n\n**Behavior**\n- Each mapping in this array becomes a property of the parent object\n- The full Mappings schema is repeated recursively at each level\n- Can be nested to any depth for complex hierarchical structures\n\n**Context Handling**\n- Each level of nesting changes the mapping CONTEXT for 'generate'\n- The extraction CONTEXT always remains the original input record\n- This means child mappings can pull data from anywhere in the input record\n\n**Common Patterns**\n\n**Nested Objects**\n```json\n{\n  \"generate\": \"customer\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\n      \"generate\": \"contact\",\n      \"dataType\": \"object\",\n      \"mappings\": [\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.customerEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n**Multiple Fields in Object**\n```json\n{\n  \"generate\": \"address\",\n  \"dataType\": \"object\",\n  \"mappings\": [\n    {\"generate\": \"street\", \"dataType\": \"string\", \"extract\": \"$.address.line1\"},\n    {\"generate\": \"city\", \"dataType\": \"string\", \"extract\": \"$.address.city\"},\n    {\"generate\": \"country\", \"dataType\": \"string\", \"extract\": \"$.address.country\"}\n  ]\n}\n```\n\nIMPORTANT: When analyzing or generating mappings, ALWAYS check if parent.dataType = \"object\"\nor if you're inside buildArrayHelper.mappings for objectarray elements. 
These are the only\nvalid contexts for the mappings array.\n","items":{"$ref":"#/components/schemas/items"}},"buildArrayHelper":{"type":"array","description":"**Purpose**\nConfigures how to construct arrays in the output record, handling various array types and inputs.\n\n**Guidance**\nThis is the REQUIRED mechanism for ALL array data types:\n\n**When to Use**\n- REQUIRED when dataType ends with \"array\" (stringarray, objectarray, etc.)\n- Each entry in this array contributes elements to the output array\n- Multiple entries allow combining data from different input arrays\n\n**Array Type Handling**\n\n**For Simple Arrays (stringarray, numberarray, booleanarray)**\n- Only the 'extract' field is used to pull values\n- JSON path with wildcards (e.g., $.items[*].name) returns multiple values\n- Each result is converted to the appropriate primitive type\n```json\n{\n  \"generate\": \"productNames\",\n  \"dataType\": \"stringarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.products[*].name\"}\n  ]\n}\n```\n\n**For Object Arrays (objectarray) - three patterns**\n\n1. Extract Only (existing objects):\n```json\n{\n  \"generate\": \"contacts\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\"extract\": \"$.account.primaryContacts[*]\"},  // Pull primary contact objects\n    {\"extract\": \"$.account.secondaryContacts[*]\"},  // Pull secondary contact objects\n    {\"extract\": \"$.vendor.contactPersons[*]\"},  // Pull vendor contact objects\n    {\"extract\": \"$.subsidiaries[*].mainContact\"}  // Pull main contact from each subsidiary\n  ]\n}\n```\n\n2. Mappings Only (constructed object):\n```json\n{\n  \"generate\": \"contactInfo\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"mappings\": [  // Creates one object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"primary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.primaryEmail\"}\n      ]\n    },\n    {\n      \"mappings\": [  // Creates another object in the array\n        {\"generate\": \"type\", \"dataType\": \"string\", \"extract\": \"secondary\"},\n        {\"generate\": \"email\", \"dataType\": \"string\", \"extract\": \"$.secondaryEmail\"}\n      ]\n    }\n  ]\n}\n```\n\n3. 
Extract AND Mappings (transform input arrays):\n```json\n{\n  \"generate\": \"lineItems\",\n  \"dataType\": \"objectarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.order.items[*]\",  // For each item in the array\n      \"mappings\": [  // Transform to this structure using the composite object\n        {\"generate\": \"sku\", \"dataType\": \"string\", \"extract\": \"$.order.items.productId\"},  // Notice: items is singular\n        {\"generate\": \"quantity\", \"dataType\": \"number\", \"extract\": \"$.order.items.qty\"},   // Notice: items is singular\n        {\"generate\": \"orderNumber\", \"dataType\": \"string\", \"extract\": \"$.order.id\"},       // Access parent data\n        {\"generate\": \"customerName\", \"dataType\": \"string\", \"extract\": \"$.customerName\"}   // Access root data\n      ]\n    }\n  ]\n}\n```\n\n**For Arrays of Arrays (arrayarray)**\n- Similar to objectarray, but inner arrays have empty 'generate' fields\n- Used for matrix/table structures\n```json\n{\n  \"generate\": \"matrix\",\n  \"dataType\": \"arrayarray\",\n  \"buildArrayHelper\": [\n    {\n      \"extract\": \"$.rows[*]\",  // For each row in the rows array\n      \"mappings\": [\n        {\n          \"dataType\": \"numberarray\",  // Note: No generate field for inner arrays\n          \"buildArrayHelper\": [\n            {\"extract\": \"$.rows.columns[*]\"}  // Notice: \"rows\" is singular in the composite object\n          ]\n        }\n      ]\n    }\n  ]\n}\n```\n\n**Important details**\n\n- When both extract and mappings are provided, the system creates special composite objects\n  that maintain hierarchical context during processing\n- This enables accessing both the current array element AND its parent context\n- Extract paths in buildArrayHelper MUST use JSON path syntax (starting with '$.')\n- Each array helper entry acts independently, potentially adding multiple elements\n\nThe buildArrayHelper is the most complex part of the mappings system - always analyze the\ndataType first to understand which pattern is appropriate.\n","items":{"type":"object","properties":{"extract":{"type":"string","description":"JSON path expression that identifies the input array or values to extract.\n\nFor objectarray with mappings, this defines which input objects to iterate through.\nThe JSON path must return either a single object or an array of objects.\n\nThe system creates special composite objects during processing to maintain\nhierarchical relationships, allowing easy access to both the current array item\nand its parent contexts.\n"},"sourceDataType":{"type":"string","description":"Declares the JSON type of the input array being iterated, to ensure\nproper type handling during array construction. 
Same enum as `dataType`.\n","enum":["string","number","boolean","object","stringarray","numberarray","booleanarray","objectarray","arrayarray"]},"default":{"type":"string","description":"Specifies a fallback value when the extracted array element is empty or\nnot found in the input data.\n"},"conditional":{"type":"object","description":"Defines conditional rules for including each array element in the result.\n","properties":{"when":{"type":"string","description":"Specifies the condition that must be met for an array element to be included.\n\n'extract_not_empty' only includes elements where the extract field returns a value.\n","enum":["extract_not_empty"]}}},"mappings":{"type":"array","description":"Contains recursive mapping definitions for complex array element transformations.\n\n**Composite object mechanism**\n\nWhen both 'extract' and 'mappings' are used together, the system implements a sophisticated\n\"composite object\" approach that is crucial for AI agents to understand:\n\n1. The system starts with the complete input record\n\n2. For each array element matched by the extract path, it creates a modified version of\n   the input record where:\n   - Array paths in the extract JSON path are REPLACED with single objects\n   - Each array ([]) in the path is converted to a single object ({})\n   - This preserves the hierarchical relationship between nested arrays\n\n**Example**\n\nGiven an input record:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": [\n      {\n        \"id\": \"O-001\",\n        \"items\": [\n          {\"sku\": \"ABC\", \"qty\": 2},\n          {\"sku\": \"XYZ\", \"qty\": 1}\n        ]\n      },\n      {\n        \"id\": \"O-002\",\n        \"items\": [\n          {\"sku\": \"DEF\", \"qty\": 3}\n        ]\n      }\n    ]\n  }\n}\n```\n\nWith extract path: `$.customer.orders[*].items[*]`\n\nFor each item, the system creates a composite object like:\n```json\n{\n  \"customer\": {\n    \"name\": \"John Doe\",\n    \"orders\": {  // Note: Array replaced with single object\n      \"id\": \"O-001\",\n      \"items\": {  // Note: Array replaced with single object\n        \"sku\": \"ABC\",\n        \"qty\": 2\n      }\n    }\n  }\n}\n```\n\nThen in your mappings, you can access:\n- The current item: `$.customer.orders.items.sku`\n- The parent order: `$.customer.orders.id`\n- Top-level data: `$.customer.name`\n\nThis approach allows for precise mapping from deeply nested structures while maintaining\naccess to all contextual parent data, without requiring complex array index management.\n\n**Implementation guidance**\n\nWhen implementing the composite object mechanism:\n\n1. Analyze the extract path to identify all array patterns (`[*]` or `[number]`)\n2. For each array in the path, understand that it will be replaced with a single object\n3. In the mappings, use paths that reference these arrays as if they were objects\n4. Remember that every mapping still has access to the full input record context\n5. 
This mechanism is especially powerful when mapping hierarchical data like:\n   - Order → Line Items → Taxes/Discounts\n   - Customer → Addresses → Address Lines\n   - Invoice → Line Items → Serial Numbers\n\nThe extract path effectively tells the system \"iterate through these arrays\",\nwhile the composite object mechanism ensures you can still access both the\ncurrent array item AND its parent context during mapping.\n","items":{"$ref":"#/components/schemas/items"}}}}},"status":{"type":"string","description":"**Purpose**\nREQUIRED on every mapping entry. Controls whether the mapping is active.\n\n**Guidance**\nAlways set to `\"Active\"`. The API rejects mappings without this field\n(validation error: \"Mapping object must have status field present.\").\n","enum":["Active"]},"conditional":{"type":"object","description":"**Purpose**\nDefines conditional processing rules for the entire mapping.\n\n**Guidance**\nThese conditions determine whether the mapping is applied based on record\nstate or field content:\n\n**When to Use**\n- When a mapping should only be applied in specific circumstances\n- To implement conditional logic without using complex handlebars expressions\n- For creating mappings that only run during create or update operations\n\n**Available Conditions**\n\n- `record_created`: Apply only when creating a new record\n  Useful for setting initial values that should not be overwritten during updates\n\n- `record_updated`: Apply only when updating an existing record\n  Useful for transformation logic that should only run during updates\n\n- `extract_not_empty`: Apply only when the extract field returns a value\n  Useful for conditional mapping based on input data availability\n\n**Example**\n```json\n{\n  \"generate\": \"statusMessage\",\n  \"dataType\": \"string\",\n  \"extract\": \"$.status.message\",\n  \"conditional\": {\n    \"when\": \"extract_not_empty\"  // Only map when status.message exists\n  }\n}\n```\n","properties":{"when":{"type":"string","description":"Specifies the condition that triggers application of this mapping:\n- record_created: Apply only when creating a new record\n- record_updated: Apply only when updating an existing record\n- extract_not_empty: Apply only when the extract field returns a value\n","enum":["record_created","record_updated","extract_not_empty"]}}}}},"Lookups":{"type":"array","description":"Configuration for value-to-value transformations using lookup tables.\n\n**Purpose**\n\nLookups provide a way to translate values from one system to another. They transform\ninput values into output values using either static mapping tables or\ndynamic lookup caches.\n\n**Lookup mechanisms**\n\nThere are two distinct lookup mechanisms available:\n\n1. **Static Lookups**: Define a simple key-value map object and store it as part of your resource\n   - Best for: Small, fixed sets of values that rarely change\n   - Implementation: Configure the `map` object with input-to-output value mappings\n   - Example: Country codes, status values, simple translations\n\n2. **Dynamic Lookups**: Reference an existing 'Lookup Cache' resource in your Celigo account\n   - Best for: Large datasets, frequently changing values, or complex reference data\n   - Implementation: Configure `_lookupCacheId` to reference cached data maintained independently\n   - Example: Product catalogs, customer databases, pricing information\n\n**Property usage**\n\nThere are two mutually exclusive ways to configure lookups, depending on which mechanism you choose:\n\n1. 
**For Static Mappings**: Configure the `map` property with a direct key-value object\n   ```json\n   \"map\": {\"US\": \"United States\", \"CA\": \"Canada\"}\n   ```\n\n2. **For Dynamic Lookups**: Configure the following properties:\n   - `_lookupCacheId`: Reference to the lookup cache resource\n   - `extract`: JSON path to extract specific value from the returned lookup object\n\n**When to use**\n\nLookups are ideal for:\n\n1. **Value Translation**: Mapping codes or IDs to human-readable values\n\n2. **Data Enrichment**: Adding related information to records during processing\n\n3. **Normalization**: Ensuring consistent formatting of values across systems\n\n**Implementation details**\n\nLookups can be referenced in:\n\n1. **Field Mappings**: Direct use in field transformation configurations\n\n2. **Handlebars Templates**: Use within templates with the syntax:\n   ```\n   {{lookup 'lookupName' record.fieldName}}\n   ```\n\n**Example usage**\n\n```json\n\"lookups\": [\n  {\n    \"name\": \"countryCodeToName\",\n    \"map\": {\n      \"US\": \"United States\",\n      \"CA\": \"Canada\",\n      \"UK\": \"United Kingdom\"\n    },\n    \"default\": \"Unknown Country\",\n    \"allowFailures\": true\n  },\n  {\n    \"name\": \"productDetails\",\n    \"_lookupCacheId\": \"60a2c4e6f321d800129a1a3c\",\n    \"extract\": \"$.details.price\",\n    \"allowFailures\": false\n  }\n]\n```\n","items":{"type":"object","properties":{"name":{"type":"string","description":"Unique identifier for the lookup table within this configuration.\n\nThis name must be unique within the scope where the lookup is defined and is used to reference\nthe lookup in handlebars templates with the syntax {{lookup 'name' value}}.\n\nChoose descriptive names that indicate the transformation purpose, such as:\n- \"countryCodeToName\" for country code to full name conversion\n- \"statusMapping\" for status code translations\n- \"departmentCodes\" for department code to name mapping\n"},"map":{"type":"object","description":"The lookup mapping table as key-value pairs.\n\nThis object contains the input values as keys and their corresponding\noutput values. When a input value matches a key in this object,\nit will be replaced with the corresponding value.\n\nThe map should be kept to a reasonable size (typically under 100 entries)\nfor optimal performance. 
For larger mapping requirements, consider using\ndynamic lookups instead.\n\nMaps can include:\n- Simple code to name conversions: {\"US\": \"United States\"}\n- Status transformations: {\"A\": \"Active\", \"I\": \"Inactive\"}\n- ID to name mappings: {\"100\": \"Marketing\", \"200\": \"Sales\"}\n\nValues can be strings, numbers, or booleans, but all are stored as strings\nin the configuration.\n"},"_lookupCacheId":{"type":"string","description":"Reference to a LookupCache resource that contains the reference data for the lookup.\n\n**Purpose**\n\nThis field connects the lookup to an external data source that has been cached in the system.\nUnlike static lookups that use the `map` property, dynamic lookups can reference large datasets\nor frequently changing information without requiring constant updates to the integration.\n\n**Implementation details**\n\nThe LookupCache resource referenced by this ID contains:\n- The data records to be used as a reference source\n- Configuration for how the data should be indexed and accessed\n- Caching parameters to balance performance with data freshness\n\n**Usage patterns**\n\nCommonly used to reference:\n- Product catalogs or SKU databases\n- Customer or account information\n- Pricing tables or discount rules\n- Complex business logic lookup tables\n\nFormat: 24-character hexadecimal string (MongoDB ObjectId)\n","format":"objectid"},"extract":{"type":"string","description":"JSON path expression that extracts a specific value from the cached lookup object.\n\n**Purpose**\n\nWhen using dynamic lookups with a LookupCache, this JSON path identifies which field to extract\nfrom the cached object after it has been retrieved using the lookup key.\n\n**Implementation details**\n\n- Must use JSON path syntax (similar to mapping extract fields)\n- Operates on the cached object returned by the lookup operation\n- Examples:\n  - \"$.name\" - Extract the name field from the top level\n  - \"$.details.price\" - Extract a nested price field\n  - \"$.attributes[0].value\" - Extract a value from the first element of an array\n\n**Usage scenario**\n\nWhen a lookup cache contains complex objects:\n```json\n// Cache entry for key \"PROD-123\":\n{\n  \"id\": \"PROD-123\",\n  \"name\": \"Premium Widget\",\n  \"details\": {\n    \"price\": 99.99,\n    \"currency\": \"USD\",\n    \"inStock\": true\n  }\n}\n```\n\nSetting extract to \"$.details.price\" would return 99.99 as the lookup result.\n\nIf no extract is provided, the entire cached object is returned as the lookup result.\n"},"default":{"type":"string","description":"Default value to use when the source value is not found in the lookup map.\n\nThis value is used as a fallback when:\n1. The source value doesn't match any key in the map\n2. 
allowFailures is set to true\n\nSetting an appropriate default helps prevent flow failures due to unexpected\nvalues and provides predictable behavior for edge cases.\n\nCommon default patterns include:\n- Descriptive unknowns: \"Unknown Country\", \"Unspecified Status\"\n- Original value indicators: \"{Original Value}\", \"No mapping found\"\n- Neutral values: \"Other\", \"N/A\", \"Miscellaneous\"\n\nIf allowFailures is false and no default is specified, the flow will fail\nwhen encountering unmapped values.\n"},"allowFailures":{"type":"boolean","description":"When true, missing lookup values will use the default value rather than causing an error.\n\n**Behavior control**\n\nThis field determines how the system handles source values that don't exist in the map:\n\n- true: Use the default value for missing mappings and continue processing\n- false: Treat missing mappings as errors, failing the record\n\n**Recommendation**\n\nSet this to true when:\n- New source values might appear over time\n- Data quality issues could introduce unexpected values\n- Processing should continue even with imperfect mapping\n\nSet this to false when:\n- Complete data accuracy is critical\n- All possible source values are known and controlled\n- Missing mappings indicate serious data problems that should be addressed\n\nThe best practice is typically to set allowFailures to true with a meaningful\ndefault value, so flows remain operational while alerting you to missing mappings.\n"}}}},"Form":{"type":"object","description":"Configuration for creating user-friendly settings forms that make it easier for less technical users\nto configure integration resources.\n\n**Settings form builder**\n\nThe Settings Form Builder allows you to create or edit user-friendly fields that prompt for text entry\nor selections that will be returned as settings applied to this resource. Your forms can include any\nfield types that you see elsewhere in integrator.io, such as:\n\n- Text fields\n- Dropdown selections\n- Checkboxes\n- Radio buttons\n- Date pickers\n- Multi-select fields\n- Search fields\n\nForm fields make it much easier for less technical users to work with your integration settings by:\n\n- Providing clear labels and help text\n- Enforcing validation rules\n- Offering pre-defined selection options\n- Grouping related settings logically\n- Supporting conditional visibility\n- Creating a consistent user experience\n","properties":{"form":{"type":"object","description":"Configuration that defines the structure, fields, and behavior of the settings form.\n\nThis object contains the complete definition of the form's layout, fields, validation rules,\nand interactive behaviors. 
The specific structure depends on the form complexity and can include\nfield definitions, sections, conditional display logic, and default values.\n\nThe form configuration is typically created and managed through the visual Form Builder interface\nrather than edited directly as JSON.\n","properties":{"fieldMap":{"type":"object","description":"A mapping of field identifiers to their configuration objects.\nEach key in this object represents a unique field ID, and the value contains\nall the configuration settings for that specific form field.\n","additionalProperties":{"type":"object","description":"Configuration for an individual form field.\n","properties":{"id":{"type":"string","description":"Unique identifier for this field within the form.\nThis value typically matches the key in the fieldMap object.\n"},"name":{"type":"string","description":"Name of the field, used as the property name when generating the settings object\nfrom the submitted form data.\n"},"type":{"type":"string","description":"The type of form control to render for this field.\n","enum":["text","checkbox","radiogroup","relativeuri","editor","keyvalue","select","multiselect","toggle","datetime","date"]},"label":{"type":"string","description":"Display label shown next to the field in the form.\n"},"description":{"type":"string","description":"Detailed explanation text that appears below the field, providing more context\nthan the label or helpText.\n"},"helpText":{"type":"string","description":"Explanatory text that appears when hovering over the help icon next to the field.\nUsed to provide additional guidance on how to use the field.\n"},"required":{"type":"boolean","description":"When true, the field must have a value before the form can be submitted.\n","default":false},"multiline":{"type":"boolean","description":"For text fields, determines whether the input should be a multi-line text area\ninstead of a single-line input.\n","default":false},"rowsMax":{"type":"integer","description":"For multiline text fields, specifies the maximum number of visible rows.\n"},"inputType":{"type":"string","description":"For text fields, specifies the HTML input type attribute to apply additional\nvalidation or specialized input behavior.\n","enum":["text","number","email","password","tel","url"]},"delimiter":{"type":"string","description":"For text fields, specifies a character to use for splitting the input into an array.\nUsed for collecting multiple values in a single text field.\n"},"mode":{"type":"string","description":"For editor fields, specifies the type of content being edited for syntax highlighting.\n","enum":["json","xml","csv","text"]},"keyName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the key input.\n"},"valueName":{"type":"string","description":"For keyvalue fields, specifies the placeholder and field name for the value input.\n"},"showDelete":{"type":"boolean","description":"For keyvalue fields, determines whether to show a delete button for each key-value pair.\n"},"doNotAllowFutureDates":{"type":"boolean","description":"For date and datetime fields, restricts selection to dates not in the future.\n"},"skipTimezoneConversion":{"type":"boolean","description":"For datetime fields, prevents automatic timezone conversion of the date value.\n"},"options":{"type":"array","description":"For fields that present choices (select, multiselect, radiogroup, toggle), defines\nthe available options.\n","items":{"oneOf":[{"title":"Option 
group","type":"object","properties":{"items":{"type":"array","items":{"oneOf":[{"title":"String value","type":"string"},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]},"description":"Array of option values/labels to display in the selection control.\n"}}},{"title":"Label-value pair","type":"object","properties":{"label":{"type":"string","description":"Display text for the option.\n"},"value":{"type":"string","description":"Value to store when this option is selected.\n"}}}]}},"visibleWhen":{"type":"array","description":"Conditional display rules that determine when this field should be visible.\nIf empty or not provided, the field is always visible.\n","items":{"type":"object","properties":{"field":{"type":"string","description":"The ID of another field whose value controls the visibility of this field.\n"},"is":{"type":"array","items":{"type":"string"},"description":"Array of values - if the referenced field has any of these values,\nthis field will be visible.\n"}}}}}}},"layout":{"type":"object","description":"Defines how the form fields are arranged and grouped in the UI.\nThe layout can organize fields into columns, sections, or other visual groupings.\n","properties":{"type":{"type":"string","description":"The type of layout to use for the form.\n","enum":["column"]},"containers":{"type":"array","description":"Array of container objects that group fields or contain nested containers.\nEach container can represent a column, box, indented section, or collapsible section.\n","items":{"type":"object","properties":{"type":{"type":"string","description":"The visual style of the container.\n","enum":["indent","box","collapse"]},"label":{"type":"string","description":"The heading text displayed for this container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this container.\nEach ID must correspond to a key in the fieldMap object.\n"},"containers":{"type":"array","description":"Nested containers within this container. 
Allows for hierarchical organization\nof fields with different visual styles.\n","items":{"type":"object","properties":{"label":{"type":"string","description":"The heading text displayed for this nested container.\n"},"fields":{"type":"array","items":{"type":"string"},"description":"Array of field IDs that should be displayed in this nested container.\n"}}}}}}}}}},"additionalProperties":true},"init":{"type":"object","description":"Configuration for custom JavaScript initialization that executes when the form is first loaded.\n\nThis object defines a JavaScript hook that prepares the form for use, sets initial field values,\nperforms validation, or otherwise customizes the form behavior before it is displayed to the user.\n\n**Function signature**\n\nThe initialization function is invoked with a single 'options' argument containing contextual information:\n```javascript\nfunction formInit(options) {\n  // Process options and return the form object\n  return options.resource.settingsForm.form;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.resource` - The current resource being configured\n- `options.parentResource` - The parent of the current resource\n- `options.grandparentResource` - The grandparent of the current resource\n- `options.license` - For integration apps, the license provisioned to the integration\n- `options.parentLicense` - For integration apps, the parent of the license\n\n\n**Common uses**\n\n- Dynamically generate field options based on resource configuration\n- Pre-populate default values from related resources\n- Apply conditional logic that depends on resource properties\n- Add, remove, or modify form fields based on user permissions or account settings\n- Fetch external data to populate selection options\n- Implement complex validation rules that depend on resource context\n- Create branching form experiences based on user selections\n\n**Return value**\n\nThe function must return a valid form object that the UI can render.\nThrowing an exception will signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called when the form\nis initialized and should handle any custom setup logic.\n\nThe function must follow the expected signature and return a valid form object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the initialization function.\n\nThe referenced script should contain the function specified in the\n'function' property. This script must be accessible within the user's account\nand have appropriate permissions.\n"}}}}},"PreSave":{"type":"object","description":"Defines a JavaScript hook that executes before the resource is saved.\n\nThis hook allows for programmatic validation, transformation, or enrichment of the\nresource itself before it is persisted. 
It can be used to enforce business rules,\nset derived properties, or implement cross-field validations that can't be expressed\nthrough the standard UI.\n\n**Function signature**\n\nThe preSave function is invoked with a single 'options' argument containing:\n```javascript\nfunction preSave(options) {\n  // Process options and return the modified resource\n  return options.newResource;\n}\n```\n\n**Available context**\n\nThe 'options' argument provides access to:\n- `options.newResource` - The resource being saved (with pending changes)\n- `options.oldResource` - The previous version of the resource (before changes)\n\n\n**Common uses**\n\n- Enforcing complex business rules across multiple fields\n- Automatically deriving field values based on other configuration\n- Performing validation that depends on external systems or data\n- Normalizing or standardizing configuration values\n- Adding computed or derived properties\n- Implementing versioning or change tracking\n- Dynamically looking up data using the Celigo API module to enrich configuration\n\n**Return value**\n\nThe function must return the newResource object (potentially modified) to be saved.\nThrowing an exception will prevent saving and signal an error to the user.\n","properties":{"function":{"type":"string","description":"The name of the function to execute within the referenced script.\n\nThis property specifies which function to invoke from the script\nreferenced by _scriptId. The function will be called just before\nthe resource is saved.\n\nThe function must follow the expected signature and return the resource object.\n"},"_scriptId":{"type":"string","description":"Reference to a predefined script resource containing the preSave function.\n\nThe referenced script should contain the function specified in the\n'function' property. 
This script must be accessible within the user's account\nand have appropriate permissions.\n"}}},"Settings":{"type":"object","description":"Configuration settings that can be accessed by hooks, filters, mappings and handlebars templates at runtime.\n\nIt enables customization of the resource's logic, allowing hooks, mappings, filters, and\nhandlebars to access and apply the settings at runtime.\n\n**Usage**\n\nThe settings object can store arbitrary JSON data that you want to save with the resource.\nWhile it's often populated through a form defined in the `settingsForm` field, you can also:\n\n- Directly provide JSON settings without using a form\n- Store configuration values used by hooks and templates\n- Create resource-specific constants and parameters\n- Maintain lookup tables or mapping structures\n- Define conditional logic parameters\n\n**Accessibility**\n\nSettings are available in:\n- All handlebars fields for building dynamic payloads\n- Field mapping expressions\n- JavaScript hooks via the options object\n- Filters and transformations\n\n**Best practices**\n\nFor non-technical users, create a custom form instead of editing the JSON directly.\nThis provides a user-friendly interface for updating settings without requiring JSON knowledge.\n","additionalProperties":true},"ResourceResponse":{"type":"object","description":"Core response fields shared by all Celigo resources","properties":{"_id":{"type":"string","format":"objectId","readOnly":true,"description":"Unique identifier for the resource.\n\nThe _id is used in:\n- API endpoints that operate on a specific resource (e.g., GET, PUT, DELETE)\n- References from other resources (e.g., flows that use this resource)\n- Job history and error tracking\n\nFormat: 24-character hexadecimal string\n"},"createdAt":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was initially created.\n\nThis read-only field is automatically set during resource creation and cannot\nbe modified. It provides an audit trail for when the resource was first added\nto the system, which can be useful for:\n\n- Resource lifecycle management\n- Audit and compliance reporting\n- Troubleshooting integration timelines\n- Identifying older resources that may need review\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"},"lastModified":{"type":"string","format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was most recently updated.\n\nThis read-only field is automatically updated whenever any property of the\nresource is modified. It provides an audit trail that can be used for:\n\n- Determining if a resource has changed since it was last reviewed\n- Monitoring configuration changes during troubleshooting\n- Implementing cache invalidation strategies\n- Synchronizing related resources based on modification time\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix)\nand will always be equal to or later than the createdAt timestamp.\n"},"deletedAt":{"type":["string","null"],"format":"date-time","readOnly":true,"description":"Timestamp indicating when the resource was marked for deletion.\n\nWhen this field is present and contains a valid timestamp, it indicates\nthat the resource has been soft-deleted (moved to the recycle bin) but not\nyet permanently removed from the system. 
This allows for recovery of\naccidentally deleted resources within a specified retention period.\n\nThe deletedAt timestamp enables:\n- Filtering deleted resources from active resource listings\n- Implementing time-based retention policies for permanent deletion\n- Tracking deletion events for audit and compliance purposes\n- Resource recovery workflows with clear timeframes\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\nWhen null or absent, the resource is considered active.\n"}},"required":["_id"]},"IAResourceResponse":{"type":"object","description":"Integration app response fields for resources that are part of integration apps","properties":{"_integrationId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the specific integration instance that contains this resource.\n\nThis field is only populated for resources that are part of an integration app\ninstallation. It contains the unique identifier (_id) of the integration\nresource that was installed in the account.\n\nThe integration instance represents a specific installed instance of an\nintegration app, with its own configuration, settings, and runtime environment.\n\nThis reference enables:\n- Tracing the resource back to its parent integration instance\n- Permission and access control based on integration ownership\n- Lifecycle management (enabling/disabling, updating, or uninstalling)\n"},"_connectorId":{"type":"string","format":"objectId","readOnly":true,"description":"Reference to the integration app that defines this resource.\n\nThis field is only populated for resources that are part of an integration app.\nIt contains the unique identifier (_id) of the integration app (connector)\nthat defines the structure, behavior, and templates for this resource.\n\nThe integration app is the published template that can be installed\nmultiple times across different accounts, with each installation creating\na separate integration instance (referenced by _integrationId).\n\nThis reference enables:\n- Identifying the source integration app for this resource\n- Determining which template version is being used\n- Linking to documentation, support, and marketplace information\n"}}},"AIDescription":{"type":"object","description":"AI-generated descriptions and documentation for the resource.\n\nThis object contains automatically generated content that helps users\nunderstand the purpose, behavior, and configuration of the resource without\nrequiring them to analyze the technical details. The AI-generated content\nis sanitized and safe for display in the UI.\n","properties":{"summary":{"type":"string","description":"Brief AI-generated summary of the resource's purpose and functionality.\n\nThis concise description provides a quick overview of what the resource does,\nwhat systems it interacts with, and its primary role in the integration.\nThe summary is suitable for display in list views, dashboards, and other\ncontexts where space is limited.\n\nMaximum length: 10KB\n"},"detailed":{"type":"string","description":"Comprehensive AI-generated description of the resource's functionality.\n\nThis detailed explanation covers the resource's purpose, configuration details,\ndata flow patterns, filtering logic, and other technical aspects. 
It provides\nin-depth information suitable for documentation, tooltips, or detailed views\nin the administration interface.\n\nThe content may include HTML formatting for improved readability.\n\nMaximum length: 10KB\n"},"generatedOn":{"type":"string","format":"date-time","description":"Timestamp indicating when the AI description was generated.\n\nThis field helps track the freshness of the AI-generated content and\ndetermine when it might need to be regenerated due to changes in the\nresource's configuration or behavior.\n\nThe timestamp is recorded in ISO 8601 format with UTC timezone (Z suffix).\n"}}},"APIM":{"type":"array","description":"Read-only field that stores information about the integration resources\npublished in the API Management (APIM) platform.\n\nThis field tracks the relationship between integrator.io resources and their\npublished counterparts in the Gravitee API Management platform, which is\ntightly integrated with the Celigo UI. When resources are \"pushed\" to Gravitee,\nthis field is populated with the relevant identifiers and statuses.\n","items":{"type":"object","properties":{"apiId":{"type":"string","description":"Identifier for the API where this integrator.io resource is published in the APIM.\n\nThis is a Gravitee resource identifier (not prefixed with underscore like Celigo IDs)\nthat uniquely identifies the API in the API Management platform.\n"},"flowId":{"type":"string","description":"Identifier for the flow within the API where this integrator.io resource is linked.\n\nWhen an API has multiple integrator.io resources linked, each resource is associated\nwith a specific flow in the API, identified by this field. This is a Gravitee\nresource identifier.\n"},"status":{"type":"string","description":"Indicates the publishing stage of the integrator.io resource in APIM.\n\nPossible values:\n- 'oaspending': The resource is published but the OpenAPI Specification (OAS) is not\n  yet published. The apiId will be updated with the API ID created in APIM.\n- 'published': The OpenAPI Specification for the integrator.io resource has been\n  successfully uploaded to APIM.\n","enum":["oaspending","published"]}}}},"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. `500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}},"responses":{"401-unauthorized":{"description":"Unauthorized. 
The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"404-not-found":{"description":"Not found. The requested resource does not exist or is not visible to the caller.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}}},"paths":{"/v1/imports/{_id}/clone":{"post":{"summary":"Clone an import","description":"Creates a copy of an existing import.\nSupports optionally remapping referenced connections (via connectionMap).\n","operationId":"cloneImport","tags":["Imports"],"parameters":[{"name":"_id","in":"path","description":"The unique identifier of the import to clone","required":true,"schema":{"type":"string","format":"objectId"}}],"requestBody":{"required":false,"content":{"application/json":{"schema":{"$ref":"#/components/schemas/CloneRequest"}}}},"responses":{"200":{"description":"Import cloned successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/CloneResponse"}}}},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"$ref":"#/components/responses/404-not-found"}}}}}}
````
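
For reference, a minimal sketch of cloning an import with Python's `requests`. The token and ids below are placeholders, and the shape shown for the optional `connectionMap` (assumed here to map existing connection ids to replacement ids) is an assumption; consult the `CloneRequest` schema for the authoritative body.

```python
import requests

BASE_URL = "https://api.integrator.io"      # or https://api.eu.integrator.io for EU accounts
TOKEN = "<your-api-token>"                  # placeholder bearer token
IMPORT_ID = "507f1f77bcf86cd799439011"      # placeholder import _id

# The body is optional; connectionMap (assumed shape: old connection id -> new connection id)
# lets the clone point at different connections than the original import.
body = {
    "connectionMap": {
        "5f0000000000000000000001": "5f0000000000000000000002",  # hypothetical ids
    }
}

resp = requests.post(
    f"{BASE_URL}/v1/imports/{IMPORT_ID}/clone",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=body,
)
resp.raise_for_status()   # 401/404 raise here
print(resp.json())        # the cloned import (CloneResponse)
```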

## Preview cloning an import

> Returns a preview of the resources that would be created by cloning the specified import.\
> The response includes the target import and any transitive dependencies (e.g. connections, scripts).\
> No resources are created by this endpoint.<br>

```json
{"openapi":"3.1.0","info":{"title":"Imports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"ClonePreviewResponse":{"type":"object","description":"Preview of the resources that would be created by a clone operation.\nEach object in the `objects` array represents a resource that will be\ncloned, including the target resource and all transitive dependencies\n(connections, scripts, exports, imports, etc.).\n","properties":{"objects":{"type":"array","description":"List of resources that would be created by the clone. Always includes\nthe target resource and may include transitive dependencies such as\nconnections, scripts, exports, imports, async helpers, and lookup caches.\n","items":{"type":"object","properties":{"model":{"type":"string","description":"The model type of the resource. Observed values include\nAsyncHelper, Connection, Export, Flow, Import, Integration,\nLookupCache, and Script.\n"},"doc":{"type":"object","description":"The full resource document that would be created by the clone.","additionalProperties":true}},"required":["model","doc"]}},"stackRequired":{"type":"boolean","description":"Whether the clone requires a stack (connector-level) environment to proceed."},"_stackId":{"type":["string","null"],"description":"The stack id associated with the resource, or null if no stack is involved."}},"required":["objects","stackRequired","_stackId"]},"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. `500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}},"responses":{"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. 
Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"404-not-found":{"description":"Not found. The requested resource does not exist or is not visible to the caller.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}}},"paths":{"/v1/imports/{_id}/clone/preview":{"get":{"summary":"Preview cloning an import","description":"Returns a preview of the resources that would be created by cloning the specified import.\nThe response includes the target import and any transitive dependencies (e.g. connections, scripts).\nNo resources are created by this endpoint.\n","operationId":"previewCloneImport","tags":["Imports"],"parameters":[{"name":"_id","in":"path","description":"The unique identifier of the import to preview cloning","required":true,"schema":{"type":"string","format":"objectId"}}],"responses":{"200":{"description":"Clone preview retrieved successfully","content":{"application/json":{"schema":{"$ref":"#/components/schemas/ClonePreviewResponse"}}}},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"$ref":"#/components/responses/404-not-found"}}}}}}
```
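
A short sketch of fetching a clone preview with Python's `requests` and listing the resources it would create; the token and import id are placeholders.

```python
import requests

BASE_URL = "https://api.integrator.io"
TOKEN = "<your-api-token>"                  # placeholder bearer token
IMPORT_ID = "507f1f77bcf86cd799439011"      # placeholder import _id

resp = requests.get(
    f"{BASE_URL}/v1/imports/{IMPORT_ID}/clone/preview",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
preview = resp.json()

# Each entry is a resource the clone would create: the import itself plus
# transitive dependencies (connections, scripts, ...).
for obj in preview["objects"]:
    print(obj["model"], "-", obj["doc"].get("name", "<unnamed>"))

print("stack required:", preview["stackRequired"], "| stack id:", preview["_stackId"])
```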

## Replace connection on import for a branched flow

> Replaces the connection used by an import in a flow and cancels any related running jobs.\
> This is useful when migrating flows between environments or updating to newer connection versions.<br>

```json
{"openapi":"3.1.0","info":{"title":"Imports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"responses":{"400-bad-request":{"description":"Bad request. The server could not understand the request because of malformed syntax or invalid parameters.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}},"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"404-not-found":{"description":"Not found. The requested resource does not exist or is not visible to the caller.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}},"schemas":{"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. `500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. 
`application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}}},"paths":{"/v1/imports/{_id}/replaceConnection":{"put":{"summary":"Replace connection on import for a branched flow","description":"Replaces the connection used by an import in a flow and cancels any related running jobs.\nThis is useful when migrating flows between environments or updating to newer connection versions.\n","operationId":"replaceConnectionOnImport","tags":["Imports"],"parameters":[{"name":"_id","in":"path","description":"The unique identifier of the import","required":true,"schema":{"type":"string","format":"objectId"}}],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"_newConnectionId":{"type":"string","description":"The id of the new connection to be used"}},"required":["_newConnectionId"]}}}},"responses":{"204":{"description":"Successfully replaced connection on import"},"400":{"$ref":"#/components/responses/400-bad-request"},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"$ref":"#/components/responses/404-not-found"}}}}}}
```
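
A minimal sketch of swapping the connection on an import, assuming placeholder ids and token; the only documented body field is `_newConnectionId`, and success is signalled by an empty 204.

```python
import requests

BASE_URL = "https://api.integrator.io"
TOKEN = "<your-api-token>"                      # placeholder bearer token
IMPORT_ID = "507f1f77bcf86cd799439011"          # placeholder import _id
NEW_CONNECTION_ID = "507f1f77bcf86cd799439022"  # placeholder connection _id

resp = requests.put(
    f"{BASE_URL}/v1/imports/{IMPORT_ID}/replaceConnection",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"_newConnectionId": NEW_CONNECTION_ID},
)

# 204 with no body means the connection was replaced (and related running jobs cancelled);
# 400/401/404 carry error details.
if resp.status_code == 204:
    print("connection replaced")
else:
    print(resp.status_code, resp.text)
```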

## Invoke an import with data and return per-record results

> Runs an existing import against the destination system with the supplied\
> data records and returns per-record results synchronously.\
> \
> The request body should contain a `data` array of records to import. Each\
> record is processed through the import's mappings, transformations, and\
> hooks before being sent to the destination.\
> \
> The response is an **array** of per-record result objects, each containing\
> a `statusCode`, the transformed `_json` payload, and any `errors`\
> encountered during processing.\
> \
> AI guidance:\
> - This endpoint **actually writes** to the destination system — there is\
>   no dry-run mode on invoke. Use `POST /v1/imports/preview` to test\
>   mappings without writing.\
> - POST-only; GET on this path returns 404.\
> - The 404 error shape for an invalid import ID is\
>   `{"errors": {"code": "invalid_ref", "message": "Import not found."}}` —\
>   note the non-standard singular `errors` object (not an array).\
> - A 200 response with errors inside individual result objects means some\
>   records failed — inspect each element's `errors` array.

```json
{"openapi":"3.1.0","info":{"title":"Imports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"responses":{"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"422-unprocessable-entity":{"description":"Unprocessable entity. The request was well-formed but was unable to be followed due to semantic errors.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}},"schemas":{"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. `500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}}},"paths":{"/v1/imports/{_id}/invoke":{"post":{"operationId":"invokeImport","tags":["Imports"],"summary":"Invoke an import with data and return per-record results","description":"Runs an existing import against the destination system with the supplied\ndata records and returns per-record results synchronously.\n\nThe request body should contain a `data` array of records to import. Each\nrecord is processed through the import's mappings, transformations, and\nhooks before being sent to the destination.\n\nThe response is an **array** of per-record result objects, each containing\na `statusCode`, the transformed `_json` payload, and any `errors`\nencountered during processing.\n\nAI guidance:\n- This endpoint **actually writes** to the destination system — there is\n  no dry-run mode on invoke. 
Use `POST /v1/imports/preview` to test\n  mappings without writing.\n- POST-only; GET on this path returns 404.\n- The 404 error shape for an invalid import ID is\n  `{\"errors\": {\"code\": \"invalid_ref\", \"message\": \"Import not found.\"}}` —\n  note the non-standard singular `errors` object (not an array).\n- A 200 response with errors inside individual result objects means some\n  records failed — inspect each element's `errors` array.","parameters":[{"in":"path","name":"_id","required":true,"schema":{"type":"string","format":"objectId"},"description":"Import ID"}],"requestBody":{"required":false,"content":{"application/json":{"schema":{"type":"object","description":"Records to import","properties":{"data":{"type":"array","description":"Array of records to send to the destination system","items":{"type":"object","additionalProperties":true}}},"additionalProperties":true}}}},"responses":{"200":{"description":"Import completed. Returns a per-record result array. Each element\ncontains the `statusCode` from the destination, the transformed\n`_json` payload, and any `errors` encountered.","content":{"application/json":{"schema":{"type":"array","items":{"type":"object","properties":{"statusCode":{"type":"integer","description":"HTTP status code from the destination system"},"_json":{"type":"object","description":"The transformed record as sent to the destination","additionalProperties":true},"errors":{"type":"array","description":"Errors encountered processing this record","items":{"type":"object","properties":{"source":{"type":"string"},"code":{"type":"string"},"message":{"type":"string"},"resolved":{"type":"boolean"},"occurredAt":{"type":"integer","format":"int64"},"stage":{"type":"string"},"classification":{"type":"string"}}}}}}}}}},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"description":"Import not found","content":{"application/json":{"schema":{"type":"object","properties":{"errors":{"type":"object","properties":{"code":{"type":"string"},"message":{"type":"string"}}}}}}}},"422":{"$ref":"#/components/responses/422-unprocessable-entity"}}}}}}
```
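
A hedged sketch of invoking an import with `requests`. The token, import id, and record fields are placeholders, and this call genuinely writes to the destination system; a 200 only means the run completed, so each per-record result still needs its `errors` checked.

```python
import requests

BASE_URL = "https://api.integrator.io"
TOKEN = "<your-api-token>"                 # placeholder bearer token
IMPORT_ID = "507f1f77bcf86cd799439011"     # placeholder import _id

# Records to push through the import's mappings/hooks and write to the destination.
payload = {"data": [{"email": "jane@example.com", "name": "Jane Doe"}]}  # hypothetical record

resp = requests.post(
    f"{BASE_URL}/v1/imports/{IMPORT_ID}/invoke",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()

# A 200 means the call ran, not that every record succeeded:
# inspect each per-record result for errors.
for i, result in enumerate(resp.json()):
    if result.get("errors"):
        print(f"record {i} failed:", result["errors"])
    else:
        print(f"record {i} ok, destination status {result.get('statusCode')}")
```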

## Preview the output of an import doc (no job created)

> Runs an import doc through the flow engine's preview pipeline against\
> supplied sample data and returns the per-stage output, without writing\
> anything to the destination system. **No Job record is created** and no\
> flow-level state is updated — this is a stateless preview.\
> \
> The UI's connection/editor "Preview" actions call this path, as does\
> `POST /v1/imports/preview` from the import editor. It is the unscoped\
> counterpart to\
> `POST /v1/integrations/{_integrationId}/flows/{_flowId}/imports/preview`\
> — prefer this variant when previewing a standalone import that is not\
> yet associated with a flow.\
> \
> AI guidance:\
> - Stage names vary by `adaptorType`. For AI Agent imports expect\
>   `request` → `raw` → `parse`; for HTTP imports expect `request` →\
>   `response` (plus any hook stages).\
> - Stage-level errors surface inside the 200 envelope in\
>   `stages[].errors` and the top-level `errors[]`; only structural body\
>   validation returns 4xx.\
> - Supply `data: [{...sample source record...}]` for the preview to run\
>   mappings over something concrete. Passing `data: [{}]` runs the\
>   pipeline with an empty record — useful for AI Agent imports where\
>   the prompt template provides the content.

```json
{"openapi":"3.1.0","info":{"title":"Imports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"ImportPreviewResponse":{"type":"object","description":"Envelope returned by `POST /v1/imports/preview` (and the scoped\n`/v1/integrations/{_integrationId}/flows/{_flowId}/imports/preview`\nvariant). Carries per-stage diagnostics alongside the sampled records\nproduced by running the supplied source data through the import's\nmapping/transform/target pipeline without writing to the destination.","properties":{"data":{"type":"array","description":"Final sampled records emitted by the preview — the output of the last\npipeline stage. Mirrors `stages[-1].data` and is exposed here as a\nconvenience for callers that only want the end result.","items":{}},"stages":{"type":"array","description":"Ordered list of pipeline stages the preview traversed. **Absent** when\nthe request body's `data[]` was empty (minimal response is\n`{data:[null]}` with no `stages`). Stage names are adaptor-dependent:\n- **AI Agent imports** produce `request` → `raw` → `parse` (the\n  request sent to the model, the raw model response, and the parsed\n  output).\n- **HTTP imports** typically produce `request` → `response` → hook\n  stages such as `postResponseHook`.\n- **Transform-only stages** (e.g. Mapper 2.0) appear between the\n  source-shaping stages and the final `request` stage.\n\nEach stage carries its output `data[]` and any `errors[]`/`null`\nraised at that stage. Stage-level errors do NOT fail the overall\ncall — they surface here and the envelope still returns 200.","items":{"type":"object","description":"One pipeline stage's diagnostic and output envelope.","properties":{"name":{"type":"string","description":"Stage identifier (e.g. `request`, `raw`, `parse`, `postResponseHook`)."},"data":{"description":"Stage output. Shape varies by stage and adaptor — `request`\nstages typically carry the prepared request payload, `raw`\nstages carry the unparsed vendor response, `parse` stages\ncarry the structured output.","oneOf":[{"title":"Array","type":"array","items":{}},{"title":"Object","type":"object","additionalProperties":true},{"title":"Null","type":"null"}]},"errors":{"type":["array","null"],"description":"Errors raised at this stage, or `null` when clean.","items":{"type":"object","additionalProperties":true}}}}},"errors":{"type":"array","description":"Top-level error aggregate — typically mirrors non-null\n`stages[].errors[]` entries. Often omitted entirely when the preview\nran clean; callers should treat `undefined` and `[]` as equivalent.","items":{"type":"object","additionalProperties":true}}}},"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. 
`500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}},"responses":{"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}}}},"paths":{"/v1/imports/preview":{"post":{"operationId":"previewImport","tags":["Imports"],"summary":"Preview the output of an import doc (no job created)","description":"Runs an import doc through the flow engine's preview pipeline against\nsupplied sample data and returns the per-stage output, without writing\nanything to the destination system. **No Job record is created** and no\nflow-level state is updated — this is a stateless preview.\n\nThe UI's connection/editor \"Preview\" actions call this path, as does\n`POST /v1/imports/preview` from the import editor. It is the unscoped\ncounterpart to\n`POST /v1/integrations/{_integrationId}/flows/{_flowId}/imports/preview`\n— prefer this variant when previewing a standalone import that is not\nyet associated with a flow.\n\nAI guidance:\n- Stage names vary by `adaptorType`. For AI Agent imports expect\n  `request` → `raw` → `parse`; for HTTP imports expect `request` →\n  `response` (plus any hook stages).\n- Stage-level errors surface inside the 200 envelope in\n  `stages[].errors` and the top-level `errors[]`; only structural body\n  validation returns 4xx.\n- Supply `data: [{...sample source record...}]` for the preview to run\n  mappings over something concrete. Passing `data: [{}]` runs the\n  pipeline with an empty record — useful for AI Agent imports where\n  the prompt template provides the content.","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","description":"Envelope containing the import document to preview plus the\nsample records to feed it.","properties":{"import":{"type":"object","description":"Full import document (mirror of `POST /v1/imports` body\nshape), optionally carrying an `_id` when previewing an\nalready-saved import. All adaptor-specific fields\n(`aiAgent`, `http`, `rdbms`, ...) 
apply exactly as they\nwould on a live import.","additionalProperties":true},"data":{"type":"array","description":"Sample source records to run through the import pipeline.\nPass `[{}]` for adaptors that don't require an input record\n(e.g. AI Agent imports whose prompt generates the payload).\nAn **empty array** (`[]`) is accepted but produces a minimal\n`{data:[null]}` response with no `stages`. **Omitting `data`\nentirely returns HTTP 204** — the engine short-circuits when\nit has no records to trace.","items":{"type":"object","additionalProperties":true}}},"required":["import"]}}}},"responses":{"200":{"description":"Preview executed. Stage-level errors in the import config surface in\n`stages[].errors` and the top-level `errors[]`; inspect those before\ntrusting `data[]`. When `data:[]` was supplied, the response is a\nminimal `{data:[null]}` with no `stages` block.","content":{"application/json":{"schema":{"$ref":"#/components/schemas/ImportPreviewResponse"}}}},"204":{"description":"No content — returned when the request body omits the `data` key\nentirely (the engine has nothing to trace). Supply `data:[{}]` or\n`data:[...]` to get a populated preview envelope."},"401":{"$ref":"#/components/responses/401-unauthorized"},"422":{"description":"Request body is malformed — most commonly the top-level `import`\nobject is missing. Structural validation returns the standard\nerrors envelope.","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}}}}}}
```
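
A sketch of calling the stateless preview, assuming (the spec does not confirm this) that a saved import can be referenced by `_id` alone inside the `import` object; otherwise supply the full import document. The sample record is hypothetical.

```python
import requests

BASE_URL = "https://api.integrator.io"
TOKEN = "<your-api-token>"   # placeholder bearer token

payload = {
    # Assumption: an already-saved import referenced by _id only; a full inline
    # import document (mirroring the POST /v1/imports body) also works here.
    "import": {"_id": "507f1f77bcf86cd799439011"},    # placeholder import _id
    # Hypothetical sample record; use [{}] for adaptors that need no input record,
    # and note that omitting "data" entirely returns a 204.
    "data": [{"orderId": "A-100", "total": 42.5}],
}

resp = requests.post(
    f"{BASE_URL}/v1/imports/preview",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()
preview = resp.json()

# Walk the per-stage output; stage names depend on the adaptor type.
for stage in preview.get("stages", []):
    status = "errors" if stage.get("errors") else "clean"
    print(stage["name"], status)

print("final output:", preview.get("data"))
```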

## Preview import data

> Preview how data would be imported, including field mappings, transformations, and\
> the final data structure that would be sent to the destination system.\
> \
> This is the flow-scoped endpoint: the import to preview is referenced by `_importId` in the request body, along with the `sampleData` records to run through its mappings.<br>

```json
{"openapi":"3.1.0","info":{"title":"Imports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"PreviewRequest":{"type":"object","description":"Request body for previewing how data would be transformed by an import without actually writing to the destination system.\n","properties":{"_importId":{"type":"string","format":"objectId","description":"The import to preview. Required when previewing within a flow."},"sampleData":{"description":"Sample source record(s) to run through the import's mappings and transformations.\nMay be a single object or an array of objects depending on the adaptor.\n","oneOf":[{"title":"Single record","type":"object","additionalProperties":true},{"title":"Array of records","type":"array","items":{"type":"object","additionalProperties":true}}]},"options":{"type":"object","description":"Optional overrides applied only for this preview.","additionalProperties":true}},"additionalProperties":true},"PreviewResponse":{"type":"object","description":"Result of running a preview against the configured import. Shows the mapped/transformed records that would be sent to the destination.\n","properties":{"data":{"type":"array","description":"Mapped and transformed records produced by the import.","items":{"type":"object","additionalProperties":true}},"errors":{"type":"array","description":"Any validation or transformation errors encountered while building the preview.","items":{"type":"object","properties":{"code":{"type":"string"},"message":{"type":"string"},"path":{"type":"string"}}}}},"additionalProperties":true},"Error":{"type":"object","description":"Standard error response envelope returned by integrator.io APIs.","properties":{"errors":{"type":"array","description":"List of errors that occurred while processing the request.","items":{"type":"object","properties":{"code":{"oneOf":[{"type":"string"},{"type":"integer"}],"description":"Machine-readable error code. Usually a string like\n`invalid_ref`, `missing_required_field`, or `unauthorized`;\nmay be an **integer** when the error mirrors an upstream HTTP\nstatus (e.g. `500`) — most commonly returned by connection-ping\nand adaptor-proxy responses."},"message":{"type":"string","description":"Human-readable description of the error."},"field":{"type":"string","description":"Optional pointer to the document field that caused the error.\nUsed by structural validation errors (`missing_required_field`,\n`invalid_ref`) to indicate which field is at fault\n(e.g. `_id`, `type`, `http.baseURI`)."},"source":{"type":"string","description":"Optional origin layer for the error — e.g. `application` when\nthe error came from the remote system the adaptor called,\n`connector` when the adaptor itself rejected the request."}},"required":["message"]}}},"required":["errors"]}},"responses":{"400-bad-request":{"description":"Bad request. The server could not understand the request because of malformed syntax or invalid parameters.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}},"401-unauthorized":{"description":"Unauthorized. 
The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}},"404-not-found":{"description":"Not found. The requested resource does not exist or is not visible to the caller.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}},"422-unprocessable-entity":{"description":"Unprocessable entity. The request was well-formed but was unable to be followed due to semantic errors.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}}}}},"paths":{"/v1/integrations/{_integrationId}/flows/{_flowId}/imports/preview":{"post":{"summary":"Preview import data","description":"Preview how data would be imported, including field mappings, transformations, and\nthe final data structure that would be sent to the destination system.\n\nThis is the unscoped endpoint — the import document is passed in the request body.\n","operationId":"previewImportData","tags":["Imports"],"parameters":[{"in":"path","name":"_integrationId","required":true,"schema":{"type":"string","format":"objectId"},"description":"Integration ID"},{"in":"path","name":"_flowId","required":true,"schema":{"type":"string","format":"objectId"},"description":"Flow ID"}],"requestBody":{"required":true,"content":{"application/json":{"schema":{"$ref":"#/components/schemas/PreviewRequest"}}}},"responses":{"200":{"description":"Successfully previewed import data","content":{"application/json":{"schema":{"$ref":"#/components/schemas/PreviewResponse"}}}},"400":{"$ref":"#/components/responses/400-bad-request"},"401":{"$ref":"#/components/responses/401-unauthorized"},"404":{"$ref":"#/components/responses/404-not-found"},"422":{"$ref":"#/components/responses/422-unprocessable-entity"}}}}}}
```
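
A sketch of the flow-scoped preview call with placeholder ids; `_importId` and `sampleData` come from the `PreviewRequest` schema, while the sample record itself is hypothetical.

```python
import requests

BASE_URL = "https://api.integrator.io"
TOKEN = "<your-api-token>"                      # placeholder bearer token
INTEGRATION_ID = "507f1f77bcf86cd799439033"     # placeholder integration _id
FLOW_ID = "507f1f77bcf86cd799439044"            # placeholder flow _id

payload = {
    "_importId": "507f1f77bcf86cd799439011",    # import inside the flow to preview
    "sampleData": {"sku": "WIDGET-1", "qty": 3} # hypothetical source record
}

resp = requests.post(
    f"{BASE_URL}/v1/integrations/{INTEGRATION_ID}/flows/{FLOW_ID}/imports/preview",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()
result = resp.json()

# Mapping/transformation problems surface in errors[]; data[] holds the
# mapped records that would be sent to the destination.
for err in result.get("errors", []):
    print("preview error:", err.get("code"), err.get("message"))
print("mapped records:", result.get("data"))
```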

## List dependencies of an import

> Returns the set of resources that depend on the specified resource.\
> The response is an object whose keys are dependent-resource types\
> (e.g. `flows`, `imports`) and whose values are arrays of dependency\
> entries.\
> \
> AI guidance:\
> - An empty object `{}` means no other resources depend on the target.\
>   This is also returned for a well-formatted but nonexistent id.

```json
{"openapi":"3.1.0","info":{"title":"Imports","version":"1.0.0"},"servers":[{"url":"https://api.integrator.io","description":"Production (US / default region)"},{"url":"https://api.eu.integrator.io","description":"Production (EU region)"}],"security":[{"bearerAuth":[]}],"components":{"securitySchemes":{"bearerAuth":{"type":"http","scheme":"bearer"}},"schemas":{"DependencyResponse":{"type":"object","description":"Map of dependent-resource types to arrays of dependency entries.\nKeys are plural resource type strings (e.g. `flows`, `imports`,\n`connections`). An empty object `{}` means no dependents.\n","additionalProperties":{"type":"array","items":{"$ref":"#/components/schemas/DependencyEntry"}}},"DependencyEntry":{"type":"object","description":"A single resource that depends on the queried resource.","properties":{"id":{"type":"string","description":"Unique identifier of the dependent resource."},"name":{"type":"string","description":"Display name of the dependent resource."},"paths":{"type":"array","description":"JSON-path-style pointers within the dependent resource's document\nthat reference the target resource.\n","items":{"type":"string"}},"accessLevel":{"type":"string","description":"The caller's access level on the dependent resource."},"dependencyIds":{"type":"object","description":"Map of resource types to arrays of ids that this dependent\nresource references on the target. Keys are singular or plural\nresource type strings; values are arrays of id strings.\n","additionalProperties":{"type":"array","items":{"type":"string"}}}},"required":["id","name","paths","accessLevel","dependencyIds"]}},"responses":{"401-unauthorized":{"description":"Unauthorized. The request lacks a valid bearer token, or the provided token\nfailed to authenticate.\n\nNote: the 401 response is produced by the auth middleware **before** the\nrequest reaches the endpoint handler, so it does **not** follow the\nstandard `{errors: [...]}` envelope. Instead the body is a bare\n`{message: string}` object with no `code`, no `errors` array. Callers\nhandling 401s should key off the HTTP status and the `message` string,\nnot try to destructure an `errors[]`.","content":{"application/json":{"schema":{"type":"object","properties":{"message":{"type":"string","description":"Human-readable description of the auth failure. Known values:\n- `\"Unauthorized\"` — no `Authorization` header on the request.\n- `\"Bearer Authentication Failed\"` — header present but token\n  is invalid, revoked, or expired."}},"required":["message"]}}}}}},"paths":{"/v1/imports/{_id}/dependencies":{"get":{"operationId":"listImportDependencies","tags":["Imports"],"summary":"List dependencies of an import","description":"Returns the set of resources that depend on the specified resource.\nThe response is an object whose keys are dependent-resource types\n(e.g. `flows`, `imports`) and whose values are arrays of dependency\nentries.\n\nAI guidance:\n- An empty object `{}` means no other resources depend on the target.\n  This is also returned for a well-formatted but nonexistent id.","parameters":[{"name":"_id","in":"path","required":true,"description":"Resource ID.","schema":{"type":"string","format":"objectId"}}],"responses":{"200":{"description":"Dependency map. Keys are resource-type strings; values are arrays\nof dependency entries. Returns `{}` when no dependents exist.\n","content":{"application/json":{"schema":{"$ref":"#/components/schemas/DependencyResponse"}}}},"401":{"$ref":"#/components/responses/401-unauthorized"}}}}}}
```
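
A small sketch of listing dependents with `requests`, treating an empty `{}` response as "no dependents (or unknown id)" per the guidance above; the token and import id are placeholders.

```python
import requests

BASE_URL = "https://api.integrator.io"
TOKEN = "<your-api-token>"                 # placeholder bearer token
IMPORT_ID = "507f1f77bcf86cd799439011"     # placeholder import _id

resp = requests.get(
    f"{BASE_URL}/v1/imports/{IMPORT_ID}/dependencies",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
deps = resp.json()

if not deps:
    # {} means nothing references this import (or the id, while well-formed, doesn't exist).
    print("no dependents")
else:
    for resource_type, entries in deps.items():
        for entry in entries:
            print(f"{resource_type}: {entry['name']} ({entry['id']}) via {entry['paths']}")
```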


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://developer.celigo.com/api/api-reference/imports.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
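
For example, with Python's `requests`, which URL-encodes the question automatically (the question text below is just an illustration):

```python
import requests

question = "Which import endpoints return a 204 response?"

resp = requests.get(
    "https://developer.celigo.com/api/api-reference/imports.md",
    params={"ask": question},   # requests handles URL-encoding of the question
)
print(resp.text)                # direct answer plus relevant excerpts and sources
```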
