Interface DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder

  • All Superinterfaces:
    org.apache.camel.builder.EndpointConsumerBuilder, org.apache.camel.EndpointConsumerResolver
  • Enclosing interface:
    DebeziumSqlserverEndpointBuilderFactory

    public static interface DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder
    extends org.apache.camel.builder.EndpointConsumerBuilder
    Builder for endpoint for the Debezium SQL Server Connector component.
    • Method Detail

      • additionalProperties

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder additionalProperties​(String key,
                                                                                                              Object value)
        Additional properties for Debezium components in case they cannot be set directly on the Camel configuration (e.g. Kafka Connect properties needed by the Debezium engine, such as setting KafkaOffsetBackingStore). The properties have to be prefixed with additionalProperties., e.g. additionalProperties.transactional.id=12345&additionalProperties.schema.registry.url=http://localhost:8811/avro. The option is a: java.util.Map<java.lang.String, java.lang.Object> type. The option is multivalued, and you can use the additionalProperties(String, Object) method to add a value (call the method multiple times to set more values). Group: common
      • additionalProperties

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder additionalProperties​(Map values)
        Additional properties for Debezium components in case they cannot be set directly on the Camel configuration (e.g. Kafka Connect properties needed by the Debezium engine, such as setting KafkaOffsetBackingStore). The properties have to be prefixed with additionalProperties., e.g. additionalProperties.transactional.id=12345&additionalProperties.schema.registry.url=http://localhost:8811/avro. The option is a: java.util.Map<java.lang.String, java.lang.Object> type. The option is multivalued, and you can use the additionalProperties(String, Object) method to add a value (call the method multiple times to set more values). Group: common
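        A minimal sketch of how multivalued additionalProperties entries are typically flattened into an endpoint URI query string. The class and method names (AdditionalPropertiesSketch, buildQuery) are illustrative only and are not part of the Camel API; this just shows the prefixing convention described above.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

// Illustrative sketch, not the Camel implementation: renders a map of
// extra Debezium/Kafka Connect settings as "additionalProperties."-prefixed
// URI query parameters.
public class AdditionalPropertiesSketch {
    static String buildQuery(Map<String, Object> props) {
        StringJoiner query = new StringJoiner("&");
        // Each entry gets the "additionalProperties." prefix so it can be
        // routed to the underlying Debezium engine configuration.
        for (Map.Entry<String, Object> e : props.entrySet()) {
            query.add("additionalProperties." + e.getKey() + "=" + e.getValue());
        }
        return query.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> props = new LinkedHashMap<>();
        props.put("transactional.id", 12345);
        props.put("schema.registry.url", "http://localhost:8811/avro");
        System.out.println(buildQuery(props));
    }
}
```

        With the two entries above, the output matches the example query string in the description.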
      • bridgeErrorHandler

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder bridgeErrorHandler​(boolean bridgeErrorHandler)
        Allows for bridging the consumer to the Camel routing Error Handler, which means that any exception that occurs while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default, the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions; these will be logged at WARN or ERROR level and ignored. The option is a: boolean type. Default: false Group: consumer
      • bridgeErrorHandler

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder bridgeErrorHandler​(String bridgeErrorHandler)
        Allows for bridging the consumer to the Camel routing Error Handler, which means that any exception that occurs while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default, the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions; these will be logged at WARN or ERROR level and ignored. The option will be converted to a boolean type. Default: false Group: consumer
      • internalKeyConverter

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder internalKeyConverter​(String internalKeyConverter)
        The Converter class that should be used to serialize and deserialize key data for offsets. The default is JSON converter. The option is a: java.lang.String type. Default: org.apache.kafka.connect.json.JsonConverter Group: consumer
      • internalValueConverter

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder internalValueConverter​(String internalValueConverter)
        The Converter class that should be used to serialize and deserialize value data for offsets. The default is JSON converter. The option is a: java.lang.String type. Default: org.apache.kafka.connect.json.JsonConverter Group: consumer
      • offsetCommitPolicy

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder offsetCommitPolicy​(String offsetCommitPolicy)
        The name of the Java class of the commit policy. It defines when an offset commit has to be triggered based on the number of events processed and the time elapsed since the last commit. This class must implement the interface 'OffsetCommitPolicy'. The default is a periodic commit policy based upon time intervals. The option is a: java.lang.String type. Default: io.debezium.embedded.spi.OffsetCommitPolicy.PeriodicCommitOffsetPolicy Group: consumer
      • offsetCommitTimeoutMs

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder offsetCommitTimeoutMs​(long offsetCommitTimeoutMs)
        Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. The default is 5 seconds. The option is a: long type. Default: 5s Group: consumer
      • offsetCommitTimeoutMs

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder offsetCommitTimeoutMs​(String offsetCommitTimeoutMs)
        Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. The default is 5 seconds. The option will be converted to a long type. Default: 5s Group: consumer
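        The String overload accepts a duration-style value such as the "5s" default, which Camel converts to a long number of milliseconds. The sketch below (parseToMillis is a hypothetical helper, not the actual Camel type converter) illustrates that normalization for a few common suffixes.

```java
// Illustrative sketch of duration-string normalization to milliseconds,
// as a String such as "5s" might be converted for offsetCommitTimeoutMs.
// Not the real Camel converter, which supports more formats.
public class DurationSketch {
    static long parseToMillis(String value) {
        if (value.endsWith("ms")) {
            // Already milliseconds, e.g. "100ms" -> 100
            return Long.parseLong(value.substring(0, value.length() - 2));
        }
        if (value.endsWith("s")) {
            // Seconds, e.g. "5s" -> 5000
            return Long.parseLong(value.substring(0, value.length() - 1)) * 1000L;
        }
        // A bare number is taken as milliseconds, e.g. "250" -> 250
        return Long.parseLong(value);
    }
}
```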
      • offsetStorageReplicationFactor

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder offsetStorageReplicationFactor​(int offsetStorageReplicationFactor)
        Replication factor used when creating the offset storage topic. Required when offset.storage is set to the KafkaOffsetBackingStore. The option is a: int type. Group: consumer
      • columnPropagateSourceType

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder columnPropagateSourceType​(String columnPropagateSourceType)
        A comma-separated list of regular expressions matching fully-qualified column names for which the column's original type and original length are added as parameters to the corresponding field schemas in the emitted change records. The option is a: java.lang.String type. Group: sqlserver
      • databaseHistory

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder databaseHistory​(String databaseHistory)
        The name of the DatabaseHistory class that should be used to store and recover database schema changes. The configuration properties for the history are prefixed with the 'database.history.' string. The option is a: java.lang.String type. Default: io.debezium.relational.history.FileDatabaseHistory Group: sqlserver
      • databaseHistoryKafkaBootstrapServers

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder databaseHistoryKafkaBootstrapServers​(String databaseHistoryKafkaBootstrapServers)
        A list of host/port pairs that the connector will use for establishing the initial connection to the Kafka cluster for retrieving database schema history previously stored by the connector. This should point to the same Kafka cluster used by the Kafka Connect process. The option is a: java.lang.String type. Group: sqlserver
      • databaseHistoryKafkaRecoveryAttempts

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder databaseHistoryKafkaRecoveryAttempts​(int databaseHistoryKafkaRecoveryAttempts)
        The number of consecutive attempts in which no data is returned from Kafka before recovery completes. The maximum amount of time to wait after receiving no data is (recovery.attempts) x (recovery.poll.interval.ms). The option is a: int type. Default: 100 Group: sqlserver
      • databaseHistoryKafkaRecoveryAttempts

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder databaseHistoryKafkaRecoveryAttempts​(String databaseHistoryKafkaRecoveryAttempts)
        The number of consecutive attempts in which no data is returned from Kafka before recovery completes. The maximum amount of time to wait after receiving no data is (recovery.attempts) x (recovery.poll.interval.ms). The option will be converted to an int type. Default: 100 Group: sqlserver
      • databaseHistoryKafkaRecoveryPollIntervalMs

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder databaseHistoryKafkaRecoveryPollIntervalMs​(int databaseHistoryKafkaRecoveryPollIntervalMs)
        The number of milliseconds to wait while polling for persisted data during recovery. The option is a: int type. Default: 100ms Group: sqlserver
      • databaseServerName

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder databaseServerName​(String databaseServerName)
        Unique name that identifies the database server and all recorded offsets, and that is used as a prefix for all schemas and topics. Each distinct installation should have a separate namespace and be monitored by at most one Debezium connector. The option is a: java.lang.String type. Required: true Group: sqlserver
      • datatypePropagateSourceType

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder datatypePropagateSourceType​(String datatypePropagateSourceType)
        A comma-separated list of regular expressions matching the database-specific data type names for which the data type's original type and original length are added as parameters to the corresponding field schemas in the emitted change records. The option is a: java.lang.String type. Group: sqlserver
      • decimalHandlingMode

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder decimalHandlingMode​(String decimalHandlingMode)
        Specify how DECIMAL and NUMERIC columns should be represented in change events: 'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses a string to represent values; 'double' represents values using Java's 'double', which may lose precision but is far easier to use in consumers. The option is a: java.lang.String type. Default: precise Group: sqlserver
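        A small standalone sketch of the trade-off between 'precise' and 'double' handling: java.math.BigDecimal preserves decimal values exactly, while Java's binary double cannot represent some decimal fractions.

```java
import java.math.BigDecimal;

// Contrasts exact decimal arithmetic (as used by 'precise' mode) with
// binary floating point (as used by 'double' mode).
public class DecimalModeSketch {
    public static void main(String[] args) {
        // BigDecimal keeps the decimal value exactly: 0.1 + 0.2 == 0.3
        BigDecimal precise = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        // double accumulates binary rounding error: 0.30000000000000004
        double approximate = 0.1 + 0.2;
        System.out.println(precise);
        System.out.println(approximate);
    }
}
```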
      • eventProcessingFailureHandlingMode

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder eventProcessingFailureHandlingMode​(String eventProcessingFailureHandlingMode)
        Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled: 'fail' (the default) raises an exception indicating the problematic event and its position, causing the connector to be stopped; 'warn' logs the problematic event and its position, and the event is skipped; 'ignore' skips the problematic event. The option is a: java.lang.String type. Default: fail Group: sqlserver
      • heartbeatIntervalMs

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder heartbeatIntervalMs​(int heartbeatIntervalMs)
        Length of an interval in milliseconds in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. The option is a: int type. Default: 0ms Group: sqlserver
      • heartbeatIntervalMs

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder heartbeatIntervalMs​(String heartbeatIntervalMs)
        Length of an interval in milliseconds in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. The option will be converted to an int type. Default: 0ms Group: sqlserver
      • includeSchemaChanges

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder includeSchemaChanges​(boolean includeSchemaChanges)
        Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change will be recorded using a key that contains the database name and whose value includes a logical description of the new schema and optionally the DDL statement(s). The default is 'true'. This is independent of how the connector internally records database history. The option is a: boolean type. Default: true Group: sqlserver
      • includeSchemaChanges

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder includeSchemaChanges​(String includeSchemaChanges)
        Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change will be recorded using a key that contains the database name and whose value includes a logical description of the new schema and optionally the DDL statement(s). The default is 'true'. This is independent of how the connector internally records database history. The option will be converted to a boolean type. Default: true Group: sqlserver
      • maxQueueSize

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder maxQueueSize​(int maxQueueSize)
        Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size. The option is a: int type. Default: 8192 Group: sqlserver
      • maxQueueSize

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder maxQueueSize​(String maxQueueSize)
        Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size. The option will be converted to an int type. Default: 8192 Group: sqlserver
      • messageKeyColumns

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder messageKeyColumns​(String messageKeyColumns)
        A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as the message key. Each expression must match the pattern '<fully-qualified table name>:<key columns>', where the table name can be specified as (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific connector, and the key columns are a comma-separated list of columns representing the custom key. For any table without an explicit key configuration, the table's primary key column(s) will be used as the message key. Example: dbserver1.inventory.orderlines:orderId,orderLineId;dbserver1.inventory.orders:id. The option is a: java.lang.String type. Group: sqlserver
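        A sketch of how this semicolon/colon/comma-separated format decomposes, using the example value from the description. The class and method (MessageKeyColumnsSketch, parseKeyColumns) are hypothetical helpers for illustration, not connector code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative parser for the messageKeyColumns format:
// "<table>:<col1>,<col2>;<table>:<col>..." -> table name mapped to its
// custom key columns.
public class MessageKeyColumnsSketch {
    static Map<String, String[]> parseKeyColumns(String spec) {
        Map<String, String[]> result = new LinkedHashMap<>();
        // Expressions are separated by semicolons.
        for (String expr : spec.split(";")) {
            // The colon separates the fully-qualified table name from the
            // comma-separated list of key columns.
            int colon = expr.lastIndexOf(':');
            String table = expr.substring(0, colon);
            String[] columns = expr.substring(colon + 1).split(",");
            result.put(table, columns);
        }
        return result;
    }
}
```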
      • snapshotIsolationMode

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder snapshotIsolationMode​(String snapshotIsolationMode)
        Controls which transaction isolation level is used and how long the connector locks the monitored tables. The default is 'repeatable_read', which means that the repeatable read isolation level is used. In addition, exclusive locks are taken only during the schema snapshot. Using a value of 'exclusive' ensures that the connector holds the exclusive lock (and thus prevents any reads and updates) for all monitored tables during the entire snapshot duration. When 'snapshot' is specified, the connector runs the initial snapshot in SNAPSHOT isolation level, which guarantees snapshot consistency. In addition, neither table nor row-level locks are held. When 'read_committed' is specified, the connector runs the initial snapshot in READ COMMITTED isolation level. No long-running locks are taken, so the initial snapshot does not prevent other transactions from updating table rows. Snapshot consistency is not guaranteed. In 'read_uncommitted' mode neither table nor row-level locks are acquired, but the connector does not guarantee snapshot consistency. The option is a: java.lang.String type. Default: repeatable_read Group: sqlserver
      • snapshotLockTimeoutMs

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder snapshotLockTimeoutMs​(long snapshotLockTimeoutMs)
        The maximum number of milliseconds to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. Defaults to 10 seconds. The option is a: long type. Default: 10s Group: sqlserver
      • snapshotLockTimeoutMs

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder snapshotLockTimeoutMs​(String snapshotLockTimeoutMs)
        The maximum number of milliseconds to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. Defaults to 10 seconds. The option will be converted to a long type. Default: 10s Group: sqlserver
      • snapshotMode

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder snapshotMode​(String snapshotMode)
        The criteria for running a snapshot upon startup of the connector. Options include: 'initial' (the default) to specify the connector should run a snapshot only when no offsets are available for the logical server name; 'schema_only' to specify the connector should run a snapshot of the schema when no offsets are available for the logical server name. The option is a: java.lang.String type. Default: initial Group: sqlserver
      • snapshotSelectStatementOverrides

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder snapshotSelectStatementOverrides​(String snapshotSelectStatementOverrides)
        This property contains a comma-separated list of fully-qualified tables (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific connector. Select statements for the individual tables are specified in further configuration properties, one for each table, identified by the id 'snapshot.select.statement.overrides.DB_NAME.TABLE_NAME' or 'snapshot.select.statement.overrides.SCHEMA_NAME.TABLE_NAME', respectively. The value of those properties is the select statement to use when retrieving data from the specific table during snapshotting. A possible use case for large append-only tables is setting a specific point where to start (resume) snapshotting, in case a previous snapshotting was interrupted. The option is a: java.lang.String type. Group: sqlserver
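        A sketch of how each table listed in this option maps to its own per-table override property id. The class and method (SnapshotOverrideSketch, overrideKeys) are hypothetical, for illustration of the naming convention only.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: derives the per-table
// 'snapshot.select.statement.overrides.<table>' property keys from the
// comma-separated table list.
public class SnapshotOverrideSketch {
    static Map<String, String> overrideKeys(String tables) {
        Map<String, String> keys = new LinkedHashMap<>();
        for (String table : tables.split(",")) {
            String name = table.trim();
            // Each table gets its own configuration property, whose value
            // would be the SELECT statement to run during snapshotting.
            keys.put(name, "snapshot.select.statement.overrides." + name);
        }
        return keys;
    }
}
```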
      • sourceTimestampMode

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder sourceTimestampMode​(String sourceTimestampMode)
        Configures the criteria for the attached timestamp within the source record (ts_ms). Options include: 'commit' (the default), where the source timestamp is set to the instant when the record was committed in the database; 'processing', where the source timestamp is set to the instant when the record was processed by Debezium. The option is a: java.lang.String type. Default: commit Group: sqlserver
      • timePrecisionMode

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder timePrecisionMode​(String timePrecisionMode)
        Time, date, and timestamps can be represented with different kinds of precision: 'adaptive' (the default) bases the precision of time, date, and timestamp values on the database column's precision; 'adaptive_time_microseconds' is like 'adaptive' mode, but TIME fields always use microseconds precision; 'connect' always represents time, date, and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which use millisecond precision regardless of the database columns' precision. The option is a: java.lang.String type. Default: adaptive Group: sqlserver
      • tombstonesOnDelete

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder tombstonesOnDelete​(boolean tombstonesOnDelete)
        Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false). Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record has been deleted. The option is a: boolean type. Default: false Group: sqlserver
      • tombstonesOnDelete

        default DebeziumSqlserverEndpointBuilderFactory.DebeziumSqlserverEndpointBuilder tombstonesOnDelete​(String tombstonesOnDelete)
        Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false). Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record has been deleted. The option will be converted to a boolean type. Default: false Group: sqlserver