StreamExecutionEnvironment.createInput

How to use the createInput method in org.apache.flink.streaming.api.environment.StreamExecutionEnvironment

Best Java code snippets using org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.createInput (selected from 315 indexed results)

origin: apache/flink

/**
 * Generic method to create an input data stream with {@link org.apache.flink.api.common.io.InputFormat}.
 *
 * <p>Since all data streams need specific information about their types, this method needs to determine the
 * type of the data produced by the input format. It will attempt to determine the data type by reflection,
 * unless the input format implements the {@link org.apache.flink.api.java.typeutils.ResultTypeQueryable} interface.
 * In the latter case, this method will invoke the
 * {@link org.apache.flink.api.java.typeutils.ResultTypeQueryable#getProducedType()} method to determine the data
 * type produced by the input format.
 *
 * <p><b>NOTES ON CHECKPOINTING: </b> In the case of a {@link FileInputFormat}, the source
 * (which executes the {@link ContinuousFileMonitoringFunction}) monitors the path, creates the
 * {@link org.apache.flink.core.fs.FileInputSplit FileInputSplits} to be processed, forwards
 * them to the downstream {@link ContinuousFileReaderOperator} to read the actual data, and exits,
 * without waiting for the readers to finish reading. This implies that no more checkpoint
 * barriers are going to be forwarded after the source exits, thus having no checkpoints.
 *
 * @param inputFormat
 *         The input format used to create the data stream
 * @param <OUT>
 *         The type of the returned data stream
 * @return The data stream that represents the data created by the input format
 */
@PublicEvolving
public <OUT> DataStreamSource<OUT> createInput(InputFormat<OUT, ?> inputFormat) {
  return createInput(inputFormat, TypeExtractor.getInputFormatTypes(inputFormat));
}
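
As a minimal usage sketch of this one-argument overload (hedged: the input path and class name below are made up), a format such as TextInputFormat can be handed straight to createInput, and the element type is extracted automatically:

import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CreateInputExample {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // TextInputFormat produces String records, so the one-argument
    // createInput can derive the output type by reflection.
    TextInputFormat format = new TextInputFormat(new Path("/tmp/input")); // hypothetical path
    DataStreamSource<String> lines = env.createInput(format);

    lines.print();
    env.execute("createInput example");
  }
}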
origin: com.alibaba.blink/flink-examples-streaming

private static SingleOutputStreamOperator<Order> getOrdersDataStream(StreamExecutionEnvironment env, String ordersPath, boolean useSourceV2) {
  final CsvReader csvReader =
    new CsvReader(ordersPath, ExecutionEnvironment.getExecutionEnvironment())
        .fieldDelimiter("|")
        .includeFields("110010010");
  final TupleCsvInputFormat<Order> inputFormat = csvReader.generateTupleCsvInputFormat(Order.class);
  if (useSourceV2) {
    return env.createInputV2(inputFormat, inputFormat.getTupleTypeInfo(), "Order source v2");
  } else {
    return env.createInput(inputFormat, inputFormat.getTupleTypeInfo(), "Order source v1");
  }
}
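
Note that the three-argument createInput and createInputV2 shown here come from the Alibaba Blink fork; vanilla Flink's public API instead offers a two-argument overload that takes an explicit TypeInformation. A minimal sketch of that overload (MyRecord and the format parameter are hypothetical):

import org.apache.flink.api.common.io.InputFormat;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExplicitTypeInfoSketch {
  // Hypothetical POJO produced by a hypothetical InputFormat.
  public static class MyRecord {
    public long id;
    public String payload;
  }

  public static DataStreamSource<MyRecord> source(
      StreamExecutionEnvironment env, InputFormat<MyRecord, ?> format) {
    // Supply the produced type explicitly instead of relying on
    // reflection over the format.
    return env.createInput(format, TypeInformation.of(MyRecord.class));
  }
}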
origin: apache/flink

if (inputFormat instanceof FileInputFormat) {
  source = createFileInput((FileInputFormat<OUT>) inputFormat, typeInfo,
      "Custom File source", FileProcessingMode.PROCESS_ONCE, -1);
} else {
  source = createInput(inputFormat, typeInfo, "Custom Source");
}
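
The PROCESS_ONCE mode in this dispatch is exactly why the checkpointing note in the javadoc applies: the monitoring source emits all splits once and then exits. As a hedged sketch (the path and interval are made up), reading a directory with readFile in PROCESS_CONTINUOUSLY mode keeps the monitoring source alive instead:

import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.FileProcessingMode;

public class ContinuousReadSketch {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    String path = "/tmp/watched-dir"; // hypothetical path

    // PROCESS_CONTINUOUSLY re-scans the path every 10 s, so the monitoring
    // source stays alive and checkpoint barriers keep flowing downstream.
    DataStream<String> lines = env.readFile(
        new TextInputFormat(new Path(path)), path,
        FileProcessingMode.PROCESS_CONTINUOUSLY, 10_000L);

    lines.print();
    env.execute("continuous file read sketch");
  }
}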
origin: com.alibaba.blink/flink-examples-streaming

private static SingleOutputStreamOperator<Lineitem> getLineitemDataStream(StreamExecutionEnvironment env, String lineitemPath, boolean useSourceV2) {
  final CsvReader csvReader =
    new CsvReader(lineitemPath, ExecutionEnvironment.getExecutionEnvironment())
      .fieldDelimiter("|")
      .includeFields("1000011000100000");
  final TupleCsvInputFormat<Lineitem> inputFormat = csvReader.generateTupleCsvInputFormat(Lineitem.class);
  if (useSourceV2) {
    return env.createInputV2(inputFormat, inputFormat.getTupleTypeInfo(), "Lineitem source v2");
  } else {
    return env.createInput(inputFormat, inputFormat.getTupleTypeInfo(), "Lineitem source v1");
  }
}
origin: com.alibaba.blink/flink-connector-hive

@Override
public DataStream<BaseRow> getBoundedStream(StreamExecutionEnvironment streamEnv) {
  try {
    List<Partition> partitionList;
    if (null == prunedPartitions || prunedPartitions.size() == 0){
      partitionList = allPartitions;
    } else {
      partitionList = prunedPartitions;
    }
    return streamEnv.createInput(
        new HiveTableInputFormat.Builder(rowTypeInfo, jobConf, dbName, tableName, isPartitionTable,
                        partitionColNames, partitionList).build()).name(explainSource());
  } catch (Exception e){
    logger.error("Can not normally create hiveTableInputFormat !", e);
    throw new RuntimeException(e);
  }
}
origin: com.alibaba.blink/flink-examples-streaming

private static SingleOutputStreamOperator<Customer> getCustomerDataStream(StreamExecutionEnvironment env, String customerPath, boolean useSourceV2) {
  final CsvReader csvReader =
    new CsvReader(customerPath, ExecutionEnvironment.getExecutionEnvironment())
        .fieldDelimiter("|")
        .includeFields("10000010");
  final TupleCsvInputFormat<Customer> inputFormat = csvReader.generateTupleCsvInputFormat(Customer.class);
  if (useSourceV2) {
    return env.createInputV2(inputFormat, inputFormat.getTupleTypeInfo(), "Custom source v2");
  } else {
    return env.createInput(inputFormat, inputFormat.getTupleTypeInfo(), "Custom source v1");
  }
}
origin: wuchong/my-flink-project

.createInput(csvInput, pojoType)
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.createInput

Javadoc

Generic method to create an input data stream with org.apache.flink.api.common.io.InputFormat.

Since all data streams need specific information about their types, this method needs to determine the type of the data produced by the input format. It will attempt to determine the data type by reflection, unless the input format implements the org.apache.flink.api.java.typeutils.ResultTypeQueryable interface. In the latter case, this method will invoke the org.apache.flink.api.java.typeutils.ResultTypeQueryable#getProducedType() method to determine the data type produced by the input format.

NOTES ON CHECKPOINTING: In the case of a FileInputFormat, the source (which executes the ContinuousFileMonitoringFunction) monitors the path, creates the org.apache.flink.core.fs.FileInputSplits to be processed, forwards them to the downstream ContinuousFileReaderOperator to read the actual data, and exits, without waiting for the readers to finish reading. This implies that no more checkpoint barriers are going to be forwarded after the source exits, so the job takes no further checkpoints.

Popular methods of StreamExecutionEnvironment

  • execute
  • getExecutionEnvironment
    Creates an execution environment that represents the context in which the program is currently executed.
  • addSource
    Adds a data source with a custom type information thus opening a DataStream. Only in very special cases does the user need to support type information.
  • getConfig
    Gets the config object.
  • enableCheckpointing
    Enables checkpointing for the streaming job. The distributed state of the streaming dataflow will be periodically snapshotted.
  • setStreamTimeCharacteristic
    Sets the time characteristic for all streams created from this environment, e.g., processing time, event time, or ingestion time.
  • setParallelism
    Sets the parallelism for operations executed through this environment. Setting a parallelism of x here will cause all operators to run with x parallel instances.
  • fromElements
    Creates a new data stream that contains the given elements. The elements must all be of the same type.
  • setStateBackend
    Sets the state backend that describes how to store and checkpoint operator state.
  • createLocalEnvironment
    Creates a LocalStreamEnvironment. The local execution environment will run the program in a multi-threaded fashion in the same JVM.
  • fromCollection
    Creates a data stream from the given iterator. Because the iterator will remain unmodified until the actual execution happens, the element type must be given explicitly.
  • getCheckpointConfig
    Gets the checkpoint config, which defines values like checkpoint interval, delay between checkpoints, etc.
  • getParallelism
  • getStreamGraph
  • setRestartStrategy
  • socketTextStream
  • readTextFile
  • generateSequence
  • clean
  • getStreamTimeCharacteristic
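
Several of the methods above commonly appear together when bootstrapping a job. A hedged sketch (the parallelism, interval, and elements are made up):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EnvironmentSetupSketch {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(4);            // default parallelism for all operators
    env.enableCheckpointing(10_000L); // checkpoint every 10 seconds

    // fromElements infers the element type from the given values.
    DataStream<String> words = env.fromElements("to", "be", "or", "not");
    words.print();

    env.execute("environment setup sketch");
  }
}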
