/**
 * Disables the propagation of exceptions thrown when committing presumably timed out Kafka
 * transactions during recovery of the job. If a Kafka transaction is timed out, a commit will
 * never be successful. Hence, use this feature to avoid recovery loops of the job. Exceptions
 * will still be logged to inform the user that data loss might have occurred.
 *
 * <p>Note that we use {@link System#currentTimeMillis()} to track the age of a transaction.
 * Moreover, only exceptions thrown during the recovery are caught, i.e., the producer will
 * attempt at least one commit of the transaction before giving up.</p>
 */
@Override
public FlinkKafkaProducer011<IN> ignoreFailuresAfterTransactionTimeout() {
	super.ignoreFailuresAfterTransactionTimeout();
	return this;
}
/**
 * Disables the propagation of exceptions thrown when committing presumably timed out Kafka
 * transactions during recovery of the job. If a Kafka transaction is timed out, a commit will
 * never be successful. Hence, use this feature to avoid recovery loops of the job. Exceptions
 * will still be logged to inform the user that data loss might have occurred.
 *
 * <p>Note that we use {@link System#currentTimeMillis()} to track the age of a transaction.
 * Moreover, only exceptions thrown during the recovery are caught, i.e., the producer will
 * attempt at least one commit of the transaction before giving up.</p>
 */
@Override
public FlinkKafkaProducer<IN> ignoreFailuresAfterTransactionTimeout() {
	super.ignoreFailuresAfterTransactionTimeout();
	return this;
}
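Note that the override body only delegates to the superclass and returns `this`: its purpose is to narrow the return type so that fluent call chains keep the subclass type rather than degrading to the base class. A minimal self-contained sketch of this covariant-return idiom (class names here are hypothetical, not Flink's):

```java
// Hypothetical base class with a fluent configuration method.
class BaseProducer {
    boolean ignoreFailures = false;

    public BaseProducer ignoreFailuresAfterTransactionTimeout() {
        this.ignoreFailures = true;
        return this;
    }
}

// Hypothetical subclass overriding the method with a covariant (narrower)
// return type, exactly like the override above: delegate, then return this.
class SpecialProducer extends BaseProducer {
    @Override
    public SpecialProducer ignoreFailuresAfterTransactionTimeout() {
        super.ignoreFailuresAfterTransactionTimeout();
        return this; // keeps the fluent chain typed as SpecialProducer
    }

    public SpecialProducer someSubclassOnlySetting() {
        return this;
    }
}

public class Demo {
    public static void main(String[] args) {
        // Without the covariant override, the chain would be typed as
        // BaseProducer after the first call, and the subclass-only method
        // below would not compile.
        SpecialProducer p = new SpecialProducer()
                .ignoreFailuresAfterTransactionTimeout()
                .someSubclassOnlySetting();
        System.out.println(p.ignoreFailures); // prints "true"
    }
}
```

The same delegate-and-return-`this` shape applies to any fluent setter inherited from a superclass.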