ExecutionConfig

How to use ExecutionConfig in org.apache.flink.api.common

Best Java code snippets using org.apache.flink.api.common.ExecutionConfig (Showing top 20 results out of 927)
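
Two common ways to obtain an ExecutionConfig appear in the snippets below: constructing one directly (typical for serializer tests) and reading it off an execution environment. A minimal sketch of both, with illustrative values only:

import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ObtainExecutionConfig {
  public static void main(String[] args) {
    // 1) A standalone instance, as the serializer-oriented snippets below use:
    ExecutionConfig standalone = new ExecutionConfig();
    standalone.setParallelism(2);

    // 2) The config attached to an execution environment, as the job snippets use:
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    ExecutionConfig attached = env.getConfig();
    attached.setAutoWatermarkInterval(100L);
  }
}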

origin: apache/flink

  @Override
  protected <T> TypeSerializer<T> createSerializer(Class<T> type) {
    // extract type information for the class and create a serializer with a fresh ExecutionConfig
    TypeInformation<T> typeInfo = TypeExtractor.getForClass(type);
    return typeInfo.createSerializer(new ExecutionConfig());
  }
origin: apache/flink

// The boolean flags and the if-conditions below were dropped in the original snippet;
// they are inferred from the trailing assertion (ExecutionConfigTest-style randomized flags).
final ExecutionConfig config = new ExecutionConfig();
if (closureCleanerEnabled) {
  config.enableClosureCleaner();
} else {
  config.disableClosureCleaner();
}
if (forceAvroEnabled) {
  config.enableForceAvro();
} else {
  config.disableForceAvro();
}
if (forceKryoEnabled) {
  config.enableForceKryo();
} else {
  config.disableForceKryo();
}
if (disableGenericTypes) {
  config.disableGenericTypes();
} else {
  config.enableGenericTypes();
}
if (objectReuseEnabled) {
  config.enableObjectReuse();
} else {
  config.disableObjectReuse();
}
if (sysoutLoggingEnabled) {
  config.enableSysoutLogging();
} else {
  config.disableSysoutLogging();
}
config.setParallelism(parallelism);
// copy1 below is a serialized round-trip copy of config (its creation was truncated).
assertEquals(closureCleanerEnabled, copy1.isClosureCleanerEnabled());
origin: apache/flink

// The if-conditions and the counts pipeline were truncated in the original snippet;
// reconstructed here along the lines of Flink's streaming WordCount example.
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.getConfig().setGlobalJobParameters(params); // make parameters available in the web interface
DataStream<String> text;
if (params.has("input")) {
  text = env.readTextFile(params.get("input"));
} else {
  System.out.println("Executing WordCount example with default input data set.");
  text = env.fromElements(WordCountData.WORDS); // bundled default data set
}
DataStream<Tuple2<String, Integer>> counts =
  text.flatMap(new Tokenizer())
    .keyBy(0).sum(1);
if (params.has("output")) {
  counts.writeAsText(params.get("output"));
} else {
  System.out.println("Printing result to stdout. Use --output to specify output path.");
  counts.print();
}
origin: apache/flink

public ArchivedExecutionConfig(ExecutionConfig ec) {
  executionMode = ec.getExecutionMode().name();
  if (ec.getRestartStrategy() != null) {
    restartStrategyDescription = ec.getRestartStrategy().getDescription();
  } else {
    restartStrategyDescription = "default";
  }
  parallelism = ec.getParallelism();
  objectReuseEnabled = ec.isObjectReuseEnabled();
  if (ec.getGlobalJobParameters() != null
      && ec.getGlobalJobParameters().toMap() != null) {
    globalJobParameters = ec.getGlobalJobParameters().toMap();
  } else {
    globalJobParameters = Collections.emptyMap();
  }
}
origin: apache/flink

public static StreamExecutionEnvironment prepareExecutionEnv(ParameterTool parameterTool)
  throws Exception {
  if (parameterTool.getNumberOfParameters() < 5) {
    System.out.println("Missing parameters!\n" +
      "Usage: Kafka --input-topic <topic> --output-topic <topic> " +
      "--bootstrap.servers <kafka brokers> " +
      "--zookeeper.connect <zk quorum> --group.id <some id>");
    throw new Exception("Missing parameters!\n" +
      "Usage: Kafka --input-topic <topic> --output-topic <topic> " +
      "--bootstrap.servers <kafka brokers> " +
      "--zookeeper.connect <zk quorum> --group.id <some id>");
  }
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  env.getConfig().disableSysoutLogging();
  env.getConfig().setRestartStrategy(RestartStrategies.fixedDelayRestart(4, 10000));
  env.enableCheckpointing(5000); // create a checkpoint every 5 seconds
  env.getConfig().setGlobalJobParameters(parameterTool); // make parameters available in the web interface
  env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
  return env;
}
origin: apache/flink

@Override
protected TypeSerializer<TestUserClassBase> createSerializer() {
  // only register one of the three child classes; the third child class is not a POJO
  ExecutionConfig conf = new ExecutionConfig();
  conf.registerPojoType(TestUserClass1.class);
  TypeSerializer<TestUserClassBase> serializer = type.createSerializer(conf);
  assert(serializer instanceof PojoSerializer);
  return serializer;
}
origin: apache/flink

ListStateDescriptor<Tuple2<String, Integer>> windowStateDesc =
  new ListStateDescriptor<>("window-contents", STRING_INT_TUPLE.createSerializer(new ExecutionConfig()));
// WindowOperator construction; the remaining constructor arguments were truncated in the
// original snippet (see the complete testTumblingEventTimeWindowsApply example further down):
  new WindowOperator<>(
    ...,
    new TimeWindow.Serializer(),
    new TupleKeySelector(),
    BasicTypeInfo.STRING_TYPE_INFO.createSerializer(new ExecutionConfig()),
    windowStateDesc,
    new InternalIterableWindowFunction<>(new PassThroughFunction()),
    ...);
ConcurrentLinkedQueue<Object> expected = new ConcurrentLinkedQueue<>();
expected.add(new StreamRecord<>(new Tuple2<>("key2", 1), 3999));
expected.add(new Watermark(4998));
origin: apache/flink

final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(parallelism);
env.setRestartStrategy(RestartStrategies.noRestart()); // fail immediately
env.getConfig().disableSysoutLogging();
Properties customProps = new Properties();
customProps.putAll(standardProps);
customProps.putAll(secureProps);
customProps.setProperty("auto.offset.reset", "none"); // test that "none" leads to an exception
FlinkKafkaConsumerBase<String> source = kafkaServer.getConsumer(topic, new SimpleStringSchema(), customProps);
origin: apache/flink

final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.getConfig().disableSysoutLogging();
env.setParallelism(parallelism);
Properties readProps = new Properties();
readProps.putAll(standardProps);
origin: apache/flink

/**
 * Test that ensures that DeserializationSchema.isEndOfStream() is properly evaluated.
 *
 * @throws Exception
 */
public void runEndOfStreamTest() throws Exception {
  final int elementCount = 300;
  final String topic = writeSequence("testEndOfStream", elementCount, 1, 1);
  // read using custom schema
  final StreamExecutionEnvironment env1 = StreamExecutionEnvironment.getExecutionEnvironment();
  env1.setParallelism(1);
  env1.getConfig().setRestartStrategy(RestartStrategies.noRestart());
  env1.getConfig().disableSysoutLogging();
  Properties props = new Properties();
  props.putAll(standardProps);
  props.putAll(secureProps);
  DataStream<Tuple2<Integer, Integer>> fromKafka = env1.addSource(kafkaServer.getConsumer(topic, new FixedNumberDeserializationSchema(elementCount), props));
  fromKafka.flatMap(new FlatMapFunction<Tuple2<Integer, Integer>, Void>() {
    @Override
    public void flatMap(Tuple2<Integer, Integer> value, Collector<Void> out) throws Exception {
      // noop ;)
    }
  });
  tryExecute(env1, "Consume " + elementCount + " elements from Kafka");
  deleteTestTopic(topic);
}
origin: apache/flink

    // (the enclosing call for these arguments was truncated in the original snippet)
    StreamExecutionEnvironment.getExecutionEnvironment(),
    kafkaServer,
    topic, parallelism, numElementsPerPartition, true);
TypeInformationSerializationSchema<Integer> schema =
    new TypeInformationSerializationSchema<>(BasicTypeInfo.INT_TYPE_INFO, new ExecutionConfig());
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(parallelism);
env.enableCheckpointing(500);
env.setRestartStrategy(RestartStrategies.noRestart());
env.getConfig().disableSysoutLogging();
Properties props = new Properties();
props.putAll(standardProps);
props.putAll(secureProps);
FlinkKafkaConsumerBase<Integer> kafkaSource = kafkaServer.getConsumer(topic, schema, props);
origin: apache/flink

@Test
@SuppressWarnings("unchecked")
public void testTumblingEventTimeWindowsApply() throws Exception {
  closeCalled.set(0);
  final int windowSize = 3;
  ListStateDescriptor<Tuple2<String, Integer>> stateDesc = new ListStateDescriptor<>("window-contents",
      STRING_INT_TUPLE.createSerializer(new ExecutionConfig()));
  WindowOperator<String, Tuple2<String, Integer>, Iterable<Tuple2<String, Integer>>, Tuple2<String, Integer>, TimeWindow> operator = new WindowOperator<>(
      TumblingEventTimeWindows.of(Time.of(windowSize, TimeUnit.SECONDS)),
      new TimeWindow.Serializer(),
      new TupleKeySelector(),
      BasicTypeInfo.STRING_TYPE_INFO.createSerializer(new ExecutionConfig()),
      stateDesc,
      new InternalIterableWindowFunction<>(new RichSumReducer<TimeWindow>()),
      EventTimeTrigger.create(),
      0,
      null /* late data output tag */);
  testTumblingEventTimeWindows(operator);
  // we close once in the rest...
  Assert.assertEquals("Close was not called.", 2, closeCalled.get());
}
origin: apache/flink

properties.setProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
// fetcher construction (leading and trailing arguments truncated in the original snippet):
  watermarksPunctuated,
  runtimeContext.getProcessingTimeService(),
  runtimeContext.getExecutionConfig().getAutoWatermarkInterval(),
  runtimeContext.getUserCodeClassLoader(),
  runtimeContext.getTaskNameWithSubtasks(),
origin: apache/flink

@Test
@SuppressWarnings("unchecked")
public void testTumblingEventTimeWindowsReduce() throws Exception {
  closeCalled.set(0);
  final int windowSize = 3;
  ReducingStateDescriptor<Tuple2<String, Integer>> stateDesc = new ReducingStateDescriptor<>("window-contents",
      new SumReducer(),
      STRING_INT_TUPLE.createSerializer(new ExecutionConfig()));
  WindowOperator<String, Tuple2<String, Integer>, Tuple2<String, Integer>, Tuple2<String, Integer>, TimeWindow> operator = new WindowOperator<>(
      TumblingEventTimeWindows.of(Time.of(windowSize, TimeUnit.SECONDS)),
      new TimeWindow.Serializer(),
      new TupleKeySelector(),
      BasicTypeInfo.STRING_TYPE_INFO.createSerializer(new ExecutionConfig()),
      stateDesc,
      new InternalSingleValueWindowFunction<>(new PassThroughWindowFunction<String, TimeWindow, Tuple2<String, Integer>>()),
      EventTimeTrigger.create(),
      0,
      null /* late data output tag */);
  testTumblingEventTimeWindows(operator);
}
origin: apache/flink

@Test
public void testNegativeTimestamps() throws Exception {
  final AssignerWithPeriodicWatermarks<Long> assigner = new NeverWatermarkExtractor();
  final TimestampsAndPeriodicWatermarksOperator<Long> operator =
      new TimestampsAndPeriodicWatermarksOperator<Long>(assigner);
  OneInputStreamOperatorTestHarness<Long, Long> testHarness =
      new OneInputStreamOperatorTestHarness<Long, Long>(operator);
  testHarness.getExecutionConfig().setAutoWatermarkInterval(50);
  testHarness.open();
  long[] values = { Long.MIN_VALUE, -1L, 0L, 1L, 2L, 3L, Long.MAX_VALUE };
  for (long value : values) {
    testHarness.processElement(new StreamRecord<>(value));
  }
  ConcurrentLinkedQueue<Object> output = testHarness.getOutput();
  for (long value: values) {
    assertEquals(value, ((StreamRecord<?>) output.poll()).getTimestamp());
  }
}
origin: apache/flink

/**
 * This verifies that an event time source works when setting stream time characteristic to
 * processing time. In this case, the watermarks should just be swallowed.
 */
@Test
public void testEventTimeSourceWithProcessingTime() throws Exception {
  StreamExecutionEnvironment env =
      StreamExecutionEnvironment.getExecutionEnvironment();
  env.setParallelism(2);
  env.getConfig().disableSysoutLogging();
  env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
  DataStream<Integer> source1 = env.addSource(new MyTimestampSource(0, 10));
  source1
    .map(new IdentityMap())
    .transform("Watermark Check", BasicTypeInfo.INT_TYPE_INFO, new CustomOperator(false));
  env.execute();
  // verify that we don't get any watermarks, the source is used as watermark source in
  // other tests, so it normally emits watermarks
  Assert.assertTrue(CustomOperator.finalWatermarks[0].size() == 0);
}
origin: apache/flink

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(PARALLELISM);
env.setStreamTimeCharacteristic(timeCharacteristic);
env.getConfig().setAutoWatermarkInterval(10);
env.enableCheckpointing(100);
env.setRestartStrategy(RestartStrategies.fixedDelayRestart(1, 0));
env.getConfig().disableSysoutLogging();
SinkValidatorUpdaterAndChecker updaterAndChecker =
  new SinkValidatorUpdaterAndChecker(numElements, 1);
    // windowed aggregation (the source of this chain was truncated in the original snippet):
    .keyBy(0)
    .timeWindow(Time.of(100, MILLISECONDS))
    .reduce(new ReduceFunction<Tuple2<Long, IntType>>() {
origin: apache/flink

public static void main(String[] args) throws Exception {
  ParameterTool params = ParameterTool.fromArgs(args);
  String outputPath = params.getRequired("outputPath");
  int recordsPerSecond = params.getInt("recordsPerSecond", 10);
  int duration = params.getInt("durationInSecond", 60);
  int offset = params.getInt("offsetInSecond", 0);
  StreamExecutionEnvironment sEnv = StreamExecutionEnvironment.getExecutionEnvironment();
  sEnv.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
  sEnv.enableCheckpointing(4000);
  sEnv.getConfig().setAutoWatermarkInterval(1000);
  // execute a simple pass through program.
  PeriodicSourceGenerator generator = new PeriodicSourceGenerator(
    recordsPerSecond, duration, offset);
  DataStream<Tuple> rows = sEnv.addSource(generator);
  DataStream<Tuple> result = rows
    .keyBy(1)
    .timeWindow(Time.seconds(5))
    .sum(0);
  result.writeAsText(outputPath + "/result.txt", FileSystem.WriteMode.OVERWRITE)
    .setParallelism(1);
  sEnv.execute();
}
origin: apache/flink

/**
 * Ensure that the user can pass a custom configuration object to the LocalEnvironment.
 */
@Test
public void testLocalEnvironmentWithConfig() throws Exception {
  Configuration conf = new Configuration();
  conf.setInteger(TaskManagerOptions.NUM_TASK_SLOTS, PARALLELISM);
  final ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(conf);
  env.setParallelism(ExecutionConfig.PARALLELISM_AUTO_MAX);
  env.getConfig().disableSysoutLogging();
  DataSet<Integer> result = env.createInput(new ParallelismDependentInputFormat())
      .rebalance()
      .mapPartition(new RichMapPartitionFunction<Integer, Integer>() {
        @Override
        public void mapPartition(Iterable<Integer> values, Collector<Integer> out) throws Exception {
          out.collect(getRuntimeContext().getIndexOfThisSubtask());
        }
      });
  List<Integer> resultCollection = result.collect();
  assertEquals(PARALLELISM, resultCollection.size());
}
origin: apache/flink

private static void runPartitioningProgram(int parallelism) throws Exception {
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  env.setParallelism(parallelism);
  env.getConfig().enableObjectReuse();
  env.setBufferTimeout(5L);
  env.enableCheckpointing(1000, CheckpointingMode.AT_LEAST_ONCE);
  env
    .addSource(new TimeStampingSource())
    .map(new IdMapper<Tuple2<Long, Long>>())
    .keyBy(0)
    .addSink(new TimestampingSink());
  env.execute("Partitioning Program");
}
org.apache.flink.api.common.ExecutionConfig

Javadoc

A config to define the behavior of the program execution. It allows one to define (among other options) the following settings:
  • The default parallelism of the program, i.e., how many parallel tasks to use for all functions that do not define a specific value directly.
  • The number of retries in the case of failed executions.
  • The delay between execution retries.
  • The ExecutionMode of the program: batch or pipelined. The default execution mode is ExecutionMode#PIPELINED.
  • Enabling or disabling the "closure cleaner". The closure cleaner pre-processes the implementations of functions. In case they are (anonymous) inner classes, it removes unused references to the enclosing class to fix certain serialization-related problems and to reduce the size of the closure.
  • Registering types and serializers to increase the efficiency of handling generic types and POJOs. This is usually only needed when the functions return not only the types declared in their signature, but also subclasses of those types. A configuration sketch follows this list.
  • The CodeAnalysisMode of the program: enable hinting/optimizing, or disable the "static code analyzer". The static code analyzer pre-interprets user-defined functions in order to gain implementation insights for program improvements that can be printed to the log or automatically applied.
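
A minimal sketch of these settings in practice, assuming the DataSet/DataStream-era ExecutionConfig API used in the snippets above; the MyEvent class is a hypothetical placeholder:

import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.api.common.ExecutionMode;

public class ExecutionConfigExample {

  // Hypothetical POJO, defined here only to illustrate type registration.
  public static class MyEvent {
    public String key;
    public int count;
    public MyEvent() {}
  }

  public static void main(String[] args) {
    ExecutionConfig config = new ExecutionConfig();

    config.setParallelism(4);                      // default parallelism for all operators
    config.setNumberOfExecutionRetries(3);         // retries in the case of failed executions
    config.setExecutionMode(ExecutionMode.BATCH);  // default is ExecutionMode.PIPELINED
    config.enableClosureCleaner();                 // strip unused enclosing-class references

    // Register the type so generic/POJO handling stays efficient, per the Javadoc above.
    config.registerPojoType(MyEvent.class);
    config.registerKryoType(MyEvent.class);
  }
}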

Most used methods

  • <init>
  • isObjectReuseEnabled
    Returns whether object reuse has been enabled or disabled. @see #enableObjectReuse()
  • disableSysoutLogging
    Disables the printing of progress update messages to System.out
  • getAutoWatermarkInterval
    Returns the interval of the automatic watermark emission.
  • setGlobalJobParameters
    Register a custom, serializable user configuration object.
  • enableObjectReuse
    Enables reusing objects that Flink internally uses for deserialization and passing data to user-code functions.
  • setAutoWatermarkInterval
    Sets the interval of the automatic watermark emission. Watermarks are used throughout the streaming API to keep track of the progress of time.
  • disableObjectReuse
    Disables reusing objects that Flink internally uses for deserialization and passing data to user-code functions.
  • getRestartStrategy
    Returns the restart strategy which has been set for the current job.
  • isSysoutLoggingEnabled
    Gets whether progress update messages should be printed to System.out.
  • registerKryoType
    Registers the given type with the serialization stack. If the type is eventually serialized as a POJO, it is registered with the POJO serializer; if it ends up being serialized with Kryo, it is registered at Kryo to make sure that only tags are written.
  • registerTypeWithKryoSerializer
    Registers the given Serializer via its class as a serializer for the given type at the KryoSerializer.
  • setRestartStrategy
  • getParallelism
  • addDefaultKryoSerializer
  • getGlobalJobParameters
  • getNumberOfExecutionRetries
  • getRegisteredKryoTypes
  • setParallelism
  • getDefaultKryoSerializerClasses
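
A short sketch tying several of these most-used methods together on a live environment, assuming the same Flink APIs as the snippets above; the parameter values are arbitrary illustrations:

import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ConfigureJob {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    ExecutionConfig config = env.getConfig();

    // Expose CLI parameters in the web interface, as several snippets above do.
    config.setGlobalJobParameters(ParameterTool.fromArgs(args));

    config.setParallelism(4);               // default operator parallelism
    config.setAutoWatermarkInterval(200L);  // emit watermarks every 200 ms
    config.enableObjectReuse();             // reuse objects when passing data to user code
    config.disableSysoutLogging();          // silence progress updates on stdout
    config.setRestartStrategy(RestartStrategies.fixedDelayRestart(4, 10000));

    // ... define sources, transformations, and sinks here, then call env.execute(...).
  }
}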
