StreamExecutionEnvironment

How to use StreamExecutionEnvironment in org.apache.flink.streaming.api.environment

Best Java code snippets using org.apache.flink.streaming.api.environment.StreamExecutionEnvironment (Showing top 20 results out of 918)

origin: apache/flink

final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
env.getConfig().setGlobalJobParameters(params);

DataStream<Tuple4<Integer, Integer, Double, Long>> carData;
if (params.has("input")) {
  carData = env.readTextFile(params.get("input")).map(new ParseCarData());
} else {
  System.out.println("Executing TopSpeedWindowing example with default input data set.");
  System.out.println("Use --input to specify file input.");
  carData = env.addSource(CarSource.create(2));
}

int evictionSec = 10;
double triggerMeters = 50;
DataStream<Tuple4<Integer, Integer, Double, Long>> topSpeeds = carData
    .assignTimestampsAndWatermarks(new CarTimestamp())
    .keyBy(0)
    .window(GlobalWindows.create())
    .evictor(TimeEvictor.of(Time.of(evictionSec, TimeUnit.SECONDS)))
    .trigger(DeltaTrigger.of(triggerMeters,
        new DeltaFunction<Tuple4<Integer, Integer, Double, Long>>() {
          @Override
          public double getDelta(
              Tuple4<Integer, Integer, Double, Long> oldDataPoint,
              Tuple4<Integer, Integer, Double, Long> newDataPoint) {
            return newDataPoint.f2 - oldDataPoint.f2;
          }
        }, carData.getType().createSerializer(env.getConfig())))
    .maxBy(1);

if (params.has("output")) {
  topSpeeds.writeAsText(params.get("output"));
} else {
  System.out.println("Printing result to stdout. Use --output to specify output path.");
  topSpeeds.print();
}

env.execute("CarTopSpeedWindowingExample");
origin: apache/flink

public static void main(String[] args) throws Exception {
  ParameterTool params = ParameterTool.fromArgs(args);
  String outputPath = params.getRequired("outputPath");
  int recordsPerSecond = params.getInt("recordsPerSecond", 10);
  int duration = params.getInt("durationInSecond", 60);
  int offset = params.getInt("offsetInSecond", 0);
  StreamExecutionEnvironment sEnv = StreamExecutionEnvironment.getExecutionEnvironment();
  sEnv.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
  sEnv.enableCheckpointing(4000);
  sEnv.getConfig().setAutoWatermarkInterval(1000);
  // execute a simple pass through program.
  PeriodicSourceGenerator generator = new PeriodicSourceGenerator(
    recordsPerSecond, duration, offset);
  DataStream<Tuple> rows = sEnv.addSource(generator);
  DataStream<Tuple> result = rows
    .keyBy(1)
    .timeWindow(Time.seconds(5))
    .sum(0);
  result.writeAsText(outputPath + "/result.txt", FileSystem.WriteMode.OVERWRITE)
    .setParallelism(1);
  sEnv.execute();
}
origin: apache/flink

@Test(expected = NullPointerException.class)
public void testFailsWithoutUpperBound() {
  final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
  env.setParallelism(1);
  DataStream<Tuple2<String, Integer>> streamOne = env.fromElements(Tuple2.of("1", 1));
  DataStream<Tuple2<String, Integer>> streamTwo = env.fromElements(Tuple2.of("1", 1));
  streamOne
    .keyBy(new Tuple2KeyExtractor())
    .intervalJoin(streamTwo.keyBy(new Tuple2KeyExtractor()))
    .between(Time.milliseconds(0), null);
}
origin: apache/flink

public StreamGraph(StreamExecutionEnvironment environment) {
  this.environment = environment;
  this.executionConfig = environment.getConfig();
  this.checkpointConfig = environment.getCheckpointConfig();
  // create an empty new stream graph.
  clear();
}
origin: apache/flink

private static void runJob() throws Exception {
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  env.fromElements(1, 2, 3)
    .print();
  env.execute();
}
origin: apache/flink

  @Test
  public void testOperatorChainWithObjectReuseAndNoOutputOperators() throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.getConfig().enableObjectReuse();
    DataStream<Integer> input = env.fromElements(1, 2, 3);
    input.flatMap(new FlatMapFunction<Integer, Integer>() {
      @Override
      public void flatMap(Integer value, Collector<Integer> out) throws Exception {
        out.collect(value << 1);
      }
    });
    env.execute();
  }
}
origin: apache/flink

final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(parallelism);
env.setRestartStrategy(RestartStrategies.noRestart()); // fail immediately
env.getConfig().disableSysoutLogging();
Properties customProps = new Properties();
customProps.putAll(standardProps);
customProps.putAll(secureProps);
customProps.setProperty("auto.offset.reset", "none"); // test that "none" leads to an exception
FlinkKafkaConsumerBase<String> source = kafkaServer.getConsumer(topic, new SimpleStringSchema(), customProps);
DataStreamSource<String> consuming = env.addSource(source);
consuming.addSink(new DiscardingSink<String>());
  env.execute("Test auto offset reset none");
} catch (Throwable e) {
origin: apache/flink

final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.getConfig().disableSysoutLogging();
env.setParallelism(parallelism);
Properties readProps = new Properties();
readProps.putAll(standardProps);
origin: apache/flink

// (truncated above: the test first generates an integer sequence into the topic)
TypeInformationSerializationSchema<Integer> schema =
  new TypeInformationSerializationSchema<>(BasicTypeInfo.INT_TYPE_INFO, new ExecutionConfig());
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(500);
env.setParallelism(parallelism);
env.setRestartStrategy(RestartStrategies.fixedDelayRestart(1, 0));
env.getConfig().disableSysoutLogging();
env.setBufferTimeout(0);
Properties props = new Properties();
props.putAll(standardProps);
props.putAll(secureProps);
FlinkKafkaConsumerBase<Integer> kafkaSource = kafkaServer.getConsumer(topic, schema, props);
env
  .addSource(kafkaSource)
  .map(new PartitionValidatingMapper(numPartitions, 1))
  .map(new FailingIdentityMapper<Integer>(failAfterElements))
  .addSink(new DiscardingSink<>()); // the original validating sink is truncated in the snippet
origin: apache/flink

if (parameterTool.getNumberOfParameters() < 6) {
  System.out.println("Missing parameters!");
  return;
}
Properties config = new Properties();
config.setProperty("bootstrap.servers", parameterTool.getRequired("bootstrap.servers"));
config.setProperty("group.id", parameterTool.getRequired("group.id"));
config.setProperty("zookeeper.connect", parameterTool.getRequired("zookeeper.connect"));
String schemaRegistryUrl = parameterTool.getRequired("schema-registry-url");
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.getConfig().disableSysoutLogging();
DataStreamSource<User> input = env
  .addSource(
    new FlinkKafkaConsumer010<>(
      parameterTool.getRequired("input-topic"),
      ConfluentRegistryAvroDeserializationSchema.forSpecific(User.class, schemaRegistryUrl),
      config).setStartFromEarliest());
input.map((MapFunction<User, String>) SpecificRecordBase::toString).print(); // the original writes to an output topic
env.execute("Kafka 0.10 Confluent Schema Registry AVRO Example");
origin: apache/flink

/**
 * Test that ensures that DeserializationSchema.isEndOfStream() is properly evaluated.
 *
 * @throws Exception
 */
public void runEndOfStreamTest() throws Exception {
  final int elementCount = 300;
  final String topic = writeSequence("testEndOfStream", elementCount, 1, 1);
  // read using custom schema
  final StreamExecutionEnvironment env1 = StreamExecutionEnvironment.getExecutionEnvironment();
  env1.setParallelism(1);
  env1.getConfig().setRestartStrategy(RestartStrategies.noRestart());
  env1.getConfig().disableSysoutLogging();
  Properties props = new Properties();
  props.putAll(standardProps);
  props.putAll(secureProps);
  DataStream<Tuple2<Integer, Integer>> fromKafka = env1.addSource(kafkaServer.getConsumer(topic, new FixedNumberDeserializationSchema(elementCount), props));
  fromKafka.flatMap(new FlatMapFunction<Tuple2<Integer, Integer>, Void>() {
    @Override
    public void flatMap(Tuple2<Integer, Integer> value, Collector<Void> out) throws Exception {
      // noop ;)
    }
  });
  tryExecute(env1, "Consume " + elementCount + " elements from Kafka");
  deleteTestTopic(topic);
}
origin: apache/flink

TypeInformation<Tuple2<Integer, Integer>> resultType =
  TypeInformation.of(new TypeHint<Tuple2<Integer, Integer>>() {});
TypeInformationSerializationSchema<Tuple2<Integer, Integer>> serSchema =
  new TypeInformationSerializationSchema<>(resultType, new ExecutionConfig());
StreamExecutionEnvironment writeEnv = StreamExecutionEnvironment.getExecutionEnvironment();
writeEnv.getConfig().setRestartStrategy(RestartStrategies.noRestart());
writeEnv.getConfig().disableSysoutLogging();
DataStream<Tuple2<Integer, Integer>> stream = writeEnv.addSource(
  new RichParallelSourceFunction<Tuple2<Integer, Integer>>() {
    @Override public void run(SourceContext<Tuple2<Integer, Integer>> ctx) { /* element generation elided in the original snippet */ }
    @Override public void cancel() {}
  });
Properties producerProperties = new Properties();
producerProperties.setProperty("retries", "0");
producerProperties.putAll(secureProps);
// (truncated in the snippet: the stream is written to Kafka with serSchema and producerProperties here)
writeEnv.execute("Write sequence");
origin: apache/flink

@Test
@SuppressWarnings("rawtypes")
public void testReduceProcessingTime() throws Exception {
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
  DataStream<Tuple2<String, Integer>> source = env.fromElements(Tuple2.of("hello", 1), Tuple2.of("hello", 2));
  DataStream<Tuple2<String, Integer>> window1 = source
      .windowAll(SlidingProcessingTimeWindows.of(Time.of(1, TimeUnit.SECONDS), Time.of(100, TimeUnit.MILLISECONDS)))
      .reduce(new DummyReducer());
  OneInputTransformation<Tuple2<String, Integer>, Tuple2<String, Integer>> transform = (OneInputTransformation<Tuple2<String, Integer>, Tuple2<String, Integer>>) window1.getTransformation();
  OneInputStreamOperator<Tuple2<String, Integer>, Tuple2<String, Integer>> operator = transform.getOperator();
  Assert.assertTrue(operator instanceof WindowOperator);
  WindowOperator<String, Tuple2<String, Integer>, ?, ?, ?> winOperator = (WindowOperator<String, Tuple2<String, Integer>, ?, ?, ?>) operator;
  Assert.assertTrue(winOperator.getTrigger() instanceof ProcessingTimeTrigger);
  Assert.assertTrue(winOperator.getWindowAssigner() instanceof SlidingProcessingTimeWindows);
  Assert.assertTrue(winOperator.getStateDescriptor() instanceof ReducingStateDescriptor);
  processElementAndEnsureOutput(winOperator, winOperator.getKeySelector(), BasicTypeInfo.STRING_TYPE_INFO, new Tuple2<>("hello", 1));
}
origin: apache/flink

/**
 * This verifies that an event time source works when setting stream time characteristic to
 * processing time. In this case, the watermarks should just be swallowed.
 */
@Test
public void testEventTimeSourceWithProcessingTime() throws Exception {
  StreamExecutionEnvironment env =
      StreamExecutionEnvironment.getExecutionEnvironment();
  env.setParallelism(2);
  env.getConfig().disableSysoutLogging();
  env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
  DataStream<Integer> source1 = env.addSource(new MyTimestampSource(0, 10));
  source1
    .map(new IdentityMap())
    .transform("Watermark Check", BasicTypeInfo.INT_TYPE_INFO, new CustomOperator(false));
  env.execute();
  // verify that we don't get any watermarks, the source is used as watermark source in
  // other tests, so it normally emits watermarks
  Assert.assertTrue(CustomOperator.finalWatermarks[0].size() == 0);
}
origin: apache/flink

@Test
public void testErrorOnEventTimeWithoutTimestamps() {
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  env.setParallelism(2);
  env.getConfig().disableSysoutLogging();
  env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
  DataStream<Tuple2<String, Integer>> source1 =
      env.fromElements(new Tuple2<>("a", 1), new Tuple2<>("b", 2));
  source1
      .keyBy(0)
      .window(TumblingEventTimeWindows.of(Time.seconds(5)))
      .reduce(new ReduceFunction<Tuple2<String, Integer>>() {
        @Override
        public Tuple2<String, Integer> reduce(Tuple2<String, Integer> value1, Tuple2<String, Integer> value2)  {
          return value1;
        }
      })
      .print();
  try {
    env.execute();
    fail("this should fail with an exception");
  } catch (Exception e) {
    // expected
  }
}
origin: apache/flink

@Test
@SuppressWarnings({"rawtypes", "unchecked"})
public void testFoldWithEvictor() throws Exception {
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);
  DataStream<Tuple2<String, Integer>> source = env.fromElements(Tuple2.of("hello", 1), Tuple2.of("hello", 2));
  DataStream<Tuple3<String, String, Integer>> window1 = source
      .windowAll(SlidingEventTimeWindows.of(Time.of(1, TimeUnit.SECONDS), Time.of(100, TimeUnit.MILLISECONDS)))
      .evictor(CountEvictor.of(100))
      .fold(new Tuple3<>("", "", 1), new DummyFolder());
  OneInputTransformation<Tuple2<String, Integer>, Tuple3<String, String, Integer>> transform =
      (OneInputTransformation<Tuple2<String, Integer>, Tuple3<String, String, Integer>>) window1.getTransformation();
  OneInputStreamOperator<Tuple2<String, Integer>, Tuple3<String, String, Integer>> operator = transform.getOperator();
  Assert.assertTrue(operator instanceof EvictingWindowOperator);
  EvictingWindowOperator<String, Tuple2<String, Integer>, ?, ?> winOperator = (EvictingWindowOperator<String, Tuple2<String, Integer>, ?, ?>) operator;
  Assert.assertTrue(winOperator.getTrigger() instanceof EventTimeTrigger);
  Assert.assertTrue(winOperator.getWindowAssigner() instanceof SlidingEventTimeWindows);
  Assert.assertTrue(winOperator.getEvictor() instanceof CountEvictor);
  Assert.assertTrue(winOperator.getStateDescriptor() instanceof ListStateDescriptor);
  winOperator.setOutputType((TypeInformation) window1.getType(), new ExecutionConfig());
  processElementAndEnsureOutput(winOperator, winOperator.getKeySelector(), BasicTypeInfo.STRING_TYPE_INFO, new Tuple2<>("hello", 1));
}
origin: apache/flink

final int numUniqueKeys = 12;
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);
env.setMaxParallelism(maxParallelism);
env.setParallelism(parallelism);
env.enableCheckpointing(100);
env.setRestartStrategy(RestartStrategies.fixedDelayRestart(1, 0L));
env.addSource(new RandomTupleSource(numEventsPerInstance, numUniqueKeys))
  .keyBy(0)
  .addSink(new ToPartitionFileSink(partitionFiles));
env.execute();

// reinterpret the pre-partitioned data as a keyed stream (the enclosing call is truncated in the snippet)
DataStreamUtils.reinterpretAsKeyedStream(
  env.addSource(new FromPartitionFileSource(partitionFiles)),
  (KeySelector<Tuple2<Integer, Integer>, Integer>) value -> value.f0,
  TypeInformation.of(Integer.class))
  .timeWindow(Time.seconds(1)) // test that also timers and aggregated state work as expected
  .reduce((ReduceFunction<Tuple2<Integer, Integer>>) (value1, value2) ->
    new Tuple2<>(value1.f0, value1.f1 + value2.f1))
  .addSink(new ValidatingSink(numEventsPerInstance * parallelism)).setParallelism(1);
env.execute();
origin: apache/flink

private static void runPartitioningProgram(int parallelism) throws Exception {
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  env.setParallelism(parallelism);
  env.getConfig().enableObjectReuse();
  env.setBufferTimeout(5L);
  env.enableCheckpointing(1000, CheckpointingMode.AT_LEAST_ONCE);
  env
    .addSource(new TimeStampingSource())
    .map(new IdMapper<Tuple2<Long, Long>>())
    .keyBy(0)
    .addSink(new TimestampingSink());
  env.execute("Partitioning Program");
}
origin: apache/flink

/**
 * Creates a streaming JobGraph from the StreamEnvironment.
 */
private JobGraph createJobGraph(
  int parallelism,
  int numberOfRetries,
  long restartDelay) {
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  env.setParallelism(parallelism);
  env.disableOperatorChaining();
  env.getConfig().setRestartStrategy(RestartStrategies.fixedDelayRestart(numberOfRetries, restartDelay));
  env.getConfig().disableSysoutLogging();
  DataStream<Integer> stream = env
    .addSource(new InfiniteTestSource())
    .shuffle()
    .map(new StatefulCounter());
  stream.addSink(new DiscardingSink<>());
  return env.getStreamGraph().getJobGraph();
}
origin: apache/flink

  public static void main(String[] args) throws Exception {
    final ParameterTool pt = ParameterTool.fromArgs(args);
    final String checkpointDir = pt.getRequired("checkpoint.dir");

    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setStateBackend(new FsStateBackend(checkpointDir));
    env.setRestartStrategy(RestartStrategies.noRestart());
    env.enableCheckpointing(1000L);
    env.getConfig().disableGenericTypes();

    env.addSource(new MySource()).uid("my-source")
        .keyBy(anInt -> 0)
        .map(new MyStatefulFunction()).uid("my-map")
        .addSink(new DiscardingSink<>()).uid("my-sink");
    env.execute();
  }
}
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment

Javadoc

The StreamExecutionEnvironment is the context in which a streaming program is executed. A LocalStreamEnvironment causes execution in the current JVM; a RemoteStreamEnvironment causes execution on a remote setup.

The environment provides methods to control the job execution (such as setting the parallelism or the fault tolerance/checkpointing parameters) and to interact with the outside world (data access).
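
For orientation, here is a minimal, self-contained sketch of obtaining each kind of environment; it is not one of the snippets on this page, and the host, port, and jar path are placeholder values:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EnvironmentChoice {
  public static void main(String[] args) throws Exception {
    // Picks a LocalStreamEnvironment when run in the IDE, the cluster context when deployed.
    StreamExecutionEnvironment auto = StreamExecutionEnvironment.getExecutionEnvironment();
    // Explicitly local: executes in the current JVM.
    StreamExecutionEnvironment local = StreamExecutionEnvironment.createLocalEnvironment();
    // Explicitly remote: submits to a remote setup, shipping the given jar (placeholder values).
    StreamExecutionEnvironment remote = StreamExecutionEnvironment.createRemoteEnvironment(
        "jobmanager-host", 6123, "/path/to/program.jar");

    auto.fromElements(1, 2, 3).print();
    auto.execute("environment-choice-example");
  }
}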

Most used methods

  • execute
  • getExecutionEnvironment
    Creates an execution environment that represents the context in which the program is currently executed.
  • addSource
    Adds a data source with a custom type information, thus opening a DataStream. Only in very special cases does the user need to specify the type information explicitly.
  • getConfig
    Gets the config object.
  • enableCheckpointing
    Enables checkpointing for the streaming job. The distributed state of the streaming dataflow will be periodically snapshotted.
  • setStreamTimeCharacteristic
    Sets the time characteristic for all streams created from this environment, e.g., processing time, event time, or ingestion time.
  • setParallelism
    Sets the parallelism for operations executed through this environment. Setting a parallelism of x here will cause all operators (such as map, batchReduce) to run with x parallel instances.
  • fromElements
    Creates a new data stream that contains the given elements. The elements must all be of the same type.
  • setStateBackend
    Sets the state backend that describes how to store and checkpoint operator state. It defines both which data structures hold state during execution and where checkpointed state will be persisted.
  • createLocalEnvironment
    Creates a LocalStreamEnvironment. The local execution environment will run the program in a multi-threaded fashion in the same JVM as the environment was created in.
  • fromCollection
    Creates a data stream from the given iterator. Because the iterator will remain unmodified until the actual execution happens, the type of the data returned by the iterator must be given explicitly.
  • getCheckpointConfig
    Gets the checkpoint config, which defines values like the checkpoint interval and the delay between checkpoints.
  • getParallelism, getStreamGraph, setRestartStrategy, socketTextStream, readTextFile, generateSequence, clean, getStreamTimeCharacteristic

A short sketch combining several of these methods appears below.
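
The following sketch strings several of the most-used methods together; the values and job name are arbitrary, and it assumes the same pre-1.12 DataStream API used by the snippets on this page:

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MostUsedMethodsSketch {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(); // getExecutionEnvironment
    env.setParallelism(2);                                              // setParallelism
    env.enableCheckpointing(5000);                                      // enableCheckpointing: snapshot every 5 s
    env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime); // setStreamTimeCharacteristic

    DataStream<String> words = env.fromElements("flink", "stream", "api"); // fromElements
    words.map(new MapFunction<String, String>() {
      @Override
      public String map(String value) {
        return value.toUpperCase();
      }
    }).print();

    env.execute("most-used-methods-sketch"); // execute
  }
}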
