StreamExecutionEnvironment.createRemoteEnvironment

How to use the createRemoteEnvironment method in org.apache.flink.streaming.api.environment.StreamExecutionEnvironment

Best Java code snippets using org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.createRemoteEnvironment (Showing top 9 results out of 315)

origin: apache/flink

/**
 * A thin wrapper layer over {@link StreamExecutionEnvironment#createRemoteEnvironment(String, int, String...)}.
 *
 * @param host The host name or address of the master (JobManager), where the
 * program should be executed.
 * @param port The port of the master (JobManager), where the program should
 * be executed.
 * @param jar_files The JAR files with code that needs to be shipped to the
 * cluster. If the program uses user-defined functions,
 * user-defined input formats, or any libraries, those must be
 * provided in the JAR files.
 * @return A remote environment that executes the program on a cluster.
 */
public PythonStreamExecutionEnvironment create_remote_execution_environment(
  String host, int port, String... jar_files) {
  return new PythonStreamExecutionEnvironment(
    StreamExecutionEnvironment.createRemoteEnvironment(host, port, jar_files), new Path(localTmpPath), scriptName);
}
origin: apache/flink

/**
 * A thin wrapper layer over {@link StreamExecutionEnvironment#createRemoteEnvironment(
 *String, int, Configuration, String...)}.
 *
 * @param host The host name or address of the master (JobManager), where the
 * program should be executed.
 * @param port The port of the master (JobManager), where the program should
 * be executed.
 * @param config The configuration used by the client that connects to the remote cluster.
 * @param jar_files The JAR files with code that needs to be shipped to the
 * cluster. If the program uses user-defined functions,
 * user-defined input formats, or any libraries, those must be
 * provided in the JAR files.
 * @return A remote environment that executes the program on a cluster.
 */
public PythonStreamExecutionEnvironment create_remote_execution_environment(
  String host, int port, Configuration config, String... jar_files) {
  return new PythonStreamExecutionEnvironment(
    StreamExecutionEnvironment.createRemoteEnvironment(host, port, config, jar_files), new Path(localTmpPath), scriptName);
}
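
For comparison, here is a minimal sketch of calling the wrapped Flink method directly with a client Configuration. The host, port, JAR path, and job name are placeholders, and any cluster-specific client settings (SSL, timeouts, and so on) would be set on the Configuration object.

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RemoteEnvironmentWithConfig {
  public static void main(String[] args) throws Exception {
    // Configuration used by the client that connects to the remote cluster;
    // cluster-specific client settings would be set here.
    Configuration clientConfig = new Configuration();

    // Placeholder host, port, and JAR path; the JAR must contain the job's user code.
    StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
        "jobmanager.example.com", 8081, clientConfig, "/path/to/user-code.jar");

    // Trivial pipeline so the job has something to execute.
    env.fromElements("hello", "world").print();

    env.execute("remote-environment-with-config");
  }
}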
origin: apache/flink

  /**
   * A thin wrapper layer over {@link StreamExecutionEnvironment#createRemoteEnvironment(
   *String, int, int, String...)}.
   *
   * @param host The host name or address of the master (JobManager), where the
   * program should be executed.
   * @param port The port of the master (JobManager), where the program should
   * be executed.
   * @param parallelism The parallelism to use during the execution.
   * @param jar_files The JAR files with code that needs to be shipped to the
   * cluster. If the program uses user-defined functions,
   * user-defined input formats, or any libraries, those must be
   * provided in the JAR files.
   * @return A remote environment that executes the program on a cluster.
   */
  public PythonStreamExecutionEnvironment create_remote_execution_environment(
    String host, int port, int parallelism, String... jar_files) {
    return new PythonStreamExecutionEnvironment(
      StreamExecutionEnvironment.createRemoteEnvironment(host, port, parallelism, jar_files), new Path(localTmpPath), scriptName);
  }
}
origin: apache/flink

/**
 * Verifies that the port passed to the RemoteStreamEnvironment is used for connecting to the cluster.
 */
@Test
public void testPortForwarding() throws Exception {
  String host = "fakeHost";
  int port = 99;
  JobExecutionResult expectedResult = new JobExecutionResult(null, 0, null);
  RestClusterClient mockedClient = Mockito.mock(RestClusterClient.class);
  when(mockedClient.run(Mockito.any(), Mockito.any(), Mockito.any(), Mockito.any(), Mockito.any()))
    .thenReturn(expectedResult);
  PowerMockito.whenNew(RestClusterClient.class).withAnyArguments().thenAnswer((invocation) -> {
      Object[] args = invocation.getArguments();
      Configuration config = (Configuration) args[0];
      Assert.assertEquals(host, config.getString(RestOptions.ADDRESS));
      Assert.assertEquals(port, config.getInteger(RestOptions.PORT));
      return mockedClient;
    }
  );
  final Configuration clientConfiguration = new Configuration();
  final StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
    host, port, clientConfiguration);
  env.fromElements(1).map(x -> x * 2);
  JobExecutionResult actualResult = env.execute("fakeJobName");
  Assert.assertEquals(expectedResult, actualResult);
}
origin: apache/flink

StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
  "localhost",
  1337, // not needed since we use ZooKeeper
origin: dataArtisans/flink-dataflow

  String[] parts = masterUrl.split(":");
  List<String> stagingFiles = options.getFilesToStage();
  this.flinkStreamEnv = StreamExecutionEnvironment.createRemoteEnvironment(parts[0],
      Integer.parseInt(parts[1]), stagingFiles.toArray(new String[stagingFiles.size()]));
} else {
origin: org.apache.beam/beam-runners-flink_2.10

 String[] parts = masterUrl.split(":");
 List<String> stagingFiles = options.getFilesToStage();
 flinkStreamEnv = StreamExecutionEnvironment.createRemoteEnvironment(parts[0],
   Integer.parseInt(parts[1]), stagingFiles.toArray(new String[stagingFiles.size()]));
} else {
origin: org.apache.beam/beam-runners-flink

clientConfig.setInteger(RestOptions.PORT, Integer.parseInt(parts.get(1)));
flinkStreamEnv =
  StreamExecutionEnvironment.createRemoteEnvironment(
    parts.get(0),
    Integer.parseInt(parts.get(1)),
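
The Beam runner snippets above all follow the same pattern: split a "host:port" master URL and pass the pieces, together with the files to stage, to createRemoteEnvironment. A self-contained sketch of that pattern, with an illustrative URL and JAR path:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MasterUrlSketch {
  public static void main(String[] args) throws Exception {
    // Illustrative values; in the runners above these come from pipeline options.
    String masterUrl = "localhost:8081";
    String[] stagingFiles = {"/path/to/pipeline.jar"};

    // Split "host:port" into its two components.
    String[] parts = masterUrl.split(":");
    String host = parts[0];
    int port = Integer.parseInt(parts[1]);

    StreamExecutionEnvironment env =
        StreamExecutionEnvironment.createRemoteEnvironment(host, port, stagingFiles);

    env.fromElements("a", "b", "c").print();
    env.execute("master-url-sketch");
  }
}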
origin: streampipes/streampipes-ce

@Override
public void prepareRuntime() throws SpRuntimeException {
 if (debug) {
  this.env = StreamExecutionEnvironment.createLocalEnvironment();
 } else {
  this.env = StreamExecutionEnvironment
      .createRemoteEnvironment(config.getHost(), config.getPort(), config.getJarFile());
 }
 appendEnvironmentConfig(this.env);
 // Add the first source to the topology
 DataStream<Map<String, Object>> messageStream1;
 SourceFunction<String> source1 = getStream1Source();
 if (source1 != null) {
  messageStream1 = env
      .addSource(source1).flatMap(new JsonToMapFormat()).flatMap(new StatisticLogger(getGraph()));
 } else {
  throw new SpRuntimeException("At least one source must be defined for a flink sepa");
 }
 DataStream<Map<String, Object>> messageStream2;
 SourceFunction<String> source2 = getStream2Source();
 if (source2 != null) {
  messageStream2 = env
      .addSource(source2).flatMap(new JsonToMapFormat()).flatMap(new StatisticLogger(getGraph()));
  appendExecutionConfig(messageStream1, messageStream2);
 } else {
  appendExecutionConfig(messageStream1);
 }
}
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.createRemoteEnvironment

Javadoc

Creates a RemoteStreamEnvironment. The remote environment sends (parts of) the program to a cluster for execution. Note that all file paths used in the program must be accessible from the cluster. The execution will use the specified parallelism.
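
A minimal usage sketch of the overload that also takes a parallelism. The host, port, parallelism, and JAR path are placeholders; the JAR must contain any user-defined functions, input formats, or libraries the job needs, and all file paths used in the program must be accessible from the cluster.

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RemoteEnvironmentSketch {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
        "jobmanager.example.com", // JobManager host (placeholder)
        8081,                     // JobManager port (placeholder)
        4,                        // parallelism used for the execution
        "/path/to/user-code.jar");

    env.fromElements(1, 2, 3)
       .map(x -> x * 2)
       .print();

    env.execute("remote-environment-sketch");
  }
}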

Popular methods of StreamExecutionEnvironment

  • execute
  • getExecutionEnvironment
    Creates an execution environment that represents the context in which the program is currently executed.
  • addSource
    Adds a data source with a custom type information, thus opening a DataStream. Only in very special cases does the user need to supply the type information explicitly.
  • getConfig
    Gets the config object.
  • enableCheckpointing
    Enables checkpointing for the streaming job. The distributed state of the streaming dataflow will be periodically snapshotted.
  • setStreamTimeCharacteristic
    Sets the time characteristic for all streams created from this environment, e.g., processing time, event time, or ingestion time.
  • setParallelism
    Sets the parallelism for operations executed through this environment. Setting a parallelism of x here will cause all operators to run with x parallel instances.
  • fromElements
    Creates a new data stream that contains the given elements. The elements must all be of the same type.
  • setStateBackend
    Sets the state backend that describes how to store and checkpoint operator state.
  • createLocalEnvironment
    Creates a LocalStreamEnvironment. The local execution environment will run the program in a multi-threaded fashion in the same JVM as the environment was created in.
  • fromCollection
    Creates a data stream from the given iterator. Because the iterator will remain unmodified until the actual execution happens, the type of the returned data must be given explicitly.
  • getCheckpointConfig
    Gets the checkpoint config, which defines values like the checkpoint interval and the delay between checkpoints.

A short sketch combining several of these methods appears below.
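As a rough, non-authoritative sketch of how several of the methods listed above fit together (the checkpoint interval, parallelism, and job name are illustrative):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class PopularMethodsSketch {
  public static void main(String[] args) throws Exception {
    // Obtain the environment that matches the current context (local or remote).
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    env.setParallelism(2);            // parallelism for all operators
    env.enableCheckpointing(10_000L); // checkpoint every 10 seconds

    // Build a trivial pipeline from in-memory elements.
    env.fromElements("a", "b", "c")
       .map(String::toUpperCase)
       .print();

    // Trigger execution; the job name is illustrative.
    env.execute("popular-methods-sketch");
  }
}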
