ClusterDescriptor

How to use ClusterDescriptor in org.apache.flink.client.deployment

Best Java code snippets using org.apache.flink.client.deployment.ClusterDescriptor (Showing top 11 results out of 315)

origin: apache/flink

if (...) {
  client = clusterDescriptor.deployJobCluster(
    clusterSpecification,
    jobGraph,
    ...);
} else {
  final Thread shutdownHook;
  if (clusterId != null) {
    client = clusterDescriptor.retrieve(clusterId);
    shutdownHook = null;
  } else {
    client = clusterDescriptor.deploySessionCluster(clusterSpecification);
    ...
  }
  ...
}
...
try {
  clusterDescriptor.close();
} catch (Exception e) {
  LOG.info("Could not properly close the cluster descriptor.", e);
}
origin: apache/flink

} else {
  try {
    final ClusterClient<T> clusterClient = clusterDescriptor.retrieve(clusterId);
    ...
  } finally {
    try {
      clusterDescriptor.close();
    } catch (Exception e) {
      LOG.info("Could not properly close the cluster descriptor.", e);
    }
  }
}
origin: apache/flink

try {
  clusterClient = clusterDescriptor.retrieve(clusterId);
  String webInterfaceUrl;
origin: apache/flink

private <T> void deployJobOnNewCluster(
    ClusterDescriptor<T> clusterDescriptor,
    JobGraph jobGraph,
    Result<T> result,
    ClassLoader classLoader) throws Exception {
  ClusterClient<T> clusterClient = null;
  try {
    // deploy job cluster with job attached
    clusterClient = clusterDescriptor.deployJobCluster(context.getClusterSpec(), jobGraph, false);
    // save information about the new cluster
    result.setClusterInformation(clusterClient.getClusterId(), clusterClient.getWebInterfaceURL());
    // get result
    if (awaitJobResult) {
      // we need to hard cast for now
      final JobExecutionResult jobResult = ((RestClusterClient<T>) clusterClient)
          .requestJobResult(jobGraph.getJobID())
          .get()
          .toJobExecutionResult(context.getClassLoader()); // throws exception if job fails
      executionResultBucket.add(jobResult);
    }
  } finally {
    try {
      if (clusterClient != null) {
        clusterClient.shutdown();
      }
    } catch (Exception e) {
      // ignore
    }
  }
}
origin: org.apache.flink/flink-clients_2.11

} else {
  try {
    final ClusterClient<T> clusterClient = clusterDescriptor.retrieve(clusterId);
    ...
  } finally {
    try {
      clusterDescriptor.close();
    } catch (Exception e) {
      LOG.info("Could not properly close the cluster descriptor.", e);
    }
  }
}
origin: apache/flink

try {
  clusterClient = clusterDescriptor.retrieve(context.getClusterId());
  try {
    clusterClient.cancel(new JobID(StringUtils.hexStringToByte(resultId)));
origin: com.alibaba.blink/flink-clients

if (...) {
  client = clusterDescriptor.deployJobCluster(
    clusterSpecification,
    jobGraph,
    ...);
} else {
  if (clusterId != null) {
    client = clusterDescriptor.retrieve(clusterId);
  } else {
    client = clusterDescriptor.deploySessionCluster(clusterSpecification);
  }
}
...
try {
  clusterDescriptor.close();
} catch (Exception e) {
  LOG.info("Could not properly close the cluster descriptor.", e);
}
origin: com.alibaba.blink/flink-clients

} else {
  try {
    final ClusterClient<T> clusterClient = clusterDescriptor.retrieve(clusterId);
    ...
  } finally {
    try {
      clusterDescriptor.close();
    } catch (Exception e) {
      LOG.info("Could not properly close the cluster descriptor.", e);
    }
  }
}
origin: apache/flink

/**
 * Tests that command line options override the configuration settings.
 */
@Test
public void testManualConfigurationOverride() throws Exception {
  final String localhost = "localhost";
  final int port = 1234;
  final Configuration configuration = getConfiguration();
  configuration.setString(JobManagerOptions.ADDRESS, localhost);
  configuration.setInteger(JobManagerOptions.PORT, port);
  @SuppressWarnings("unchecked")
  final AbstractCustomCommandLine<StandaloneClusterId> defaultCLI =
    (AbstractCustomCommandLine<StandaloneClusterId>) getCli(configuration);
  final String manualHostname = "123.123.123.123";
  final int manualPort = 4321;
  final String[] args = {"-m", manualHostname + ':' + manualPort};
  CommandLine commandLine = defaultCLI.parseCommandLineOptions(args, false);
  final ClusterDescriptor<StandaloneClusterId> clusterDescriptor =
    defaultCLI.createClusterDescriptor(commandLine);
  final ClusterClient<?> clusterClient = clusterDescriptor.retrieve(defaultCLI.getClusterId(commandLine));
  final LeaderConnectionInfo clusterConnectionInfo = clusterClient.getClusterConnectionInfo();
  assertThat(clusterConnectionInfo.getHostname(), Matchers.equalTo(manualHostname));
  assertThat(clusterConnectionInfo.getPort(), Matchers.equalTo(manualPort));
}
origin: org.apache.flink/flink-clients_2.11

if (...) {
  client = clusterDescriptor.deployJobCluster(
    clusterSpecification,
    jobGraph,
    ...);
} else {
  final Thread shutdownHook;
  if (clusterId != null) {
    client = clusterDescriptor.retrieve(clusterId);
    shutdownHook = null;
  } else {
    client = clusterDescriptor.deploySessionCluster(clusterSpecification);
    ...
  }
  ...
}
...
try {
  clusterDescriptor.close();
} catch (Exception e) {
  LOG.info("Could not properly close the cluster descriptor.", e);
}
origin: apache/flink

/**
 * Tests that the configuration is properly passed via the DefaultCLI to the
 * created ClusterDescriptor.
 */
@Test
public void testConfigurationPassing() throws Exception {
  final Configuration configuration = getConfiguration();
  final String localhost = "localhost";
  final int port = 1234;
  configuration.setString(JobManagerOptions.ADDRESS, localhost);
  configuration.setInteger(JobManagerOptions.PORT, port);
  @SuppressWarnings("unchecked")
  final AbstractCustomCommandLine<StandaloneClusterId> defaultCLI =
    (AbstractCustomCommandLine<StandaloneClusterId>) getCli(configuration);
  final String[] args = {};
  CommandLine commandLine = defaultCLI.parseCommandLineOptions(args, false);
  final ClusterDescriptor<StandaloneClusterId> clusterDescriptor =
    defaultCLI.createClusterDescriptor(commandLine);
  final ClusterClient<?> clusterClient = clusterDescriptor.retrieve(defaultCLI.getClusterId(commandLine));
  final LeaderConnectionInfo clusterConnectionInfo = clusterClient.getClusterConnectionInfo();
  assertThat(clusterConnectionInfo.getHostname(), Matchers.equalTo(localhost));
  assertThat(clusterConnectionInfo.getPort(), Matchers.equalTo(port));
}
org.apache.flink.client.deployment.ClusterDescriptor

Javadoc

A descriptor to deploy a cluster (e.g. Yarn or Mesos) and return a Client for Cluster communication.
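
A minimal sketch of the retrieve/close lifecycle against an already running standalone cluster, assuming the Flink 1.7-era API used in the snippets above (StandaloneClusterDescriptor, StandaloneClusterId.getInstance()); the class name, host, and port here are illustrative only.

import org.apache.flink.client.deployment.StandaloneClusterDescriptor;
import org.apache.flink.client.deployment.StandaloneClusterId;
import org.apache.flink.client.program.ClusterClient;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.JobManagerOptions;

public class ClusterDescriptorExample {

  public static void main(String[] args) throws Exception {
    // Point the descriptor at an already running standalone cluster
    // (address and port are illustrative, not a recommended setup).
    Configuration configuration = new Configuration();
    configuration.setString(JobManagerOptions.ADDRESS, "localhost");
    configuration.setInteger(JobManagerOptions.PORT, 6123);

    StandaloneClusterDescriptor clusterDescriptor = new StandaloneClusterDescriptor(configuration);
    ClusterClient<StandaloneClusterId> clusterClient = null;
    try {
      // retrieve() connects to the existing cluster and returns a client for it.
      clusterClient = clusterDescriptor.retrieve(StandaloneClusterId.getInstance());
      System.out.println("Web UI: " + clusterClient.getWebInterfaceURL());
    } finally {
      // Release the client and the descriptor once we are done with them.
      if (clusterClient != null) {
        clusterClient.shutdown();
      }
      clusterDescriptor.close();
    }
  }
}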

Most used methods

  • retrieve
    Retrieves an existing Flink Cluster.
  • deployJobCluster
    Deploys a per-job cluster with the given job on the cluster.
  • close
  • deploySessionCluster
    Triggers deployment of a cluster.
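
The retrieve-or-deploy-then-close pattern from the CliFrontend snippets above, factored into a small generic helper as a sketch; the RetrieveOrDeploy class and retrieveOrDeploy method names are made up for illustration and assume the same Flink 1.7-era interface.

import org.apache.flink.client.deployment.ClusterDescriptor;
import org.apache.flink.client.deployment.ClusterSpecification;
import org.apache.flink.client.program.ClusterClient;

public final class RetrieveOrDeploy {

  // Returns a client for clusterId if one was given, otherwise deploys a fresh
  // session cluster; in both cases the descriptor itself is closed afterwards,
  // mirroring the snippets above.
  static <T> ClusterClient<T> retrieveOrDeploy(
      ClusterDescriptor<T> clusterDescriptor,
      ClusterSpecification clusterSpecification,
      T clusterId) throws Exception {
    try {
      if (clusterId != null) {
        // Attach to an already running cluster.
        return clusterDescriptor.retrieve(clusterId);
      } else {
        // No cluster id given: bring up a new session cluster.
        return clusterDescriptor.deploySessionCluster(clusterSpecification);
      }
    } finally {
      try {
        // The descriptor is only needed for deployment/retrieval,
        // not for the lifetime of the returned client.
        clusterDescriptor.close();
      } catch (Exception e) {
        // Swallow: closing the descriptor must not mask the actual result.
      }
    }
  }
}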
