
How to use SourceProcessors in com.hazelcast.jet.core.processor

Best Java code snippets using com.hazelcast.jet.core.processor.SourceProcessors (Showing top 18 results out of 315)

origin: hazelcast/hazelcast-jet

/**
 * Convenience for {@link #map(String, Predicate, Projection)}
 * which uses a {@link DistributedFunction} as the projection function.
 */
@Nonnull
public static <T, K, V> BatchSource<T> map(
    @Nonnull String mapName,
    @Nonnull Predicate<? super K, ? super V> predicate,
    @Nonnull DistributedFunction<? super Map.Entry<K, V>, ? extends T> projectionFn
) {
  return batchFromProcessor("mapSource(" + mapName + ')', readMapP(mapName, predicate, projectionFn));
}
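The pipeline-level counterpart of this overload is Sources.map(String, Predicate, DistributedFunction). Below is a minimal sketch, assuming the Jet 3.x pipeline API and a hypothetical IMap "ages" that maps person name to age; the predicate and projection are illustrative only:

import com.hazelcast.jet.pipeline.BatchSource;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;
import com.hazelcast.query.Predicates;
import java.util.Map;

// select the names of people older than 21; the query predicate runs on the entry value ("this")
BatchSource<String> adults = Sources.map(
    "ages",                                     // hypothetical IMap name
    Predicates.greaterThan("this", 21),
    (Map.Entry<String, Integer> e) -> e.getKey());

Pipeline p = Pipeline.create();
p.drawFrom(adults)
 .drainTo(Sinks.logger());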
origin: hazelcast/hazelcast-jet

/**
 * Returns a supplier of processors for
 * {@link Sources#mapJournal(String, JournalInitialPosition)}.
 */
@Nonnull
public static <K, V> ProcessorMetaSupplier streamMapP(
    @Nonnull String mapName,
    @Nonnull JournalInitialPosition initialPos,
    @Nonnull EventTimePolicy<? super Entry<K, V>> eventTimePolicy
) {
  return streamMapP(mapName, mapPutEvents(), mapEventToEntry(), initialPos, eventTimePolicy);
}
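At the pipeline level this is exposed as Sources.mapJournal. A minimal sketch, assuming the Jet 3.x pipeline API, that the event journal is enabled for the map in the cluster configuration, and a hypothetical map name "trades":

import com.hazelcast.jet.pipeline.JournalInitialPosition;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

Pipeline p = Pipeline.create();
p.drawFrom(Sources.<String, Long>mapJournal("trades", JournalInitialPosition.START_FROM_OLDEST))
 .withoutTimestamps()                           // skip event-time handling in this sketch
 .drainTo(Sinks.logger());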
origin: hazelcast/hazelcast-jet

/**
 * Returns a source that fetches entries from a Hazelcast {@code ICache}
 * with the given name and emits them as {@code Map.Entry}. It leverages
 * data locality by making each of the underlying processors fetch only
 * those entries that are stored on the member where it is running.
 * <p>
 * The source does not save any state to snapshot. If the job is restarted,
 * it will re-emit all entries.
 * <p>
 * If the {@code ICache} is modified while being read, or if there is a
 * cluster topology change (triggering data migration), the source may
 * miss and/or duplicate some entries.
 * <p>
 * The default local parallelism for this processor is 2 (or 1 if just 1
 * CPU is available).
 */
@Nonnull
public static <K, V> BatchSource<Entry<K, V>> cache(@Nonnull String cacheName) {
  return batchFromProcessor("cacheSource(" + cacheName + ')', readCacheP(cacheName));
}
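A minimal sketch of this source in a pipeline, assuming the Jet 3.x pipeline API and a hypothetical ICache named "prices" that is already configured on the cluster:

import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

Pipeline p = Pipeline.create();
p.drawFrom(Sources.<String, Double>cache("prices"))   // emits Map.Entry<String, Double>
 .drainTo(Sinks.logger());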
origin: hazelcast/hazelcast-jet

  /**
   * Convenience for {@link Sources#jdbc(DistributedSupplier,
   * ToResultSetFunction, DistributedFunction)}.
   * A non-distributed, single-worker source which fetches the whole result set
   * with a single query on a single member.
   * <p>
   * This method executes exactly one query in the target database. If the
   * underlying table is modified while being read, the behavior depends on
   * the configured transaction isolation level in the target database. Refer
   * to the documentation for the target database system.
   * <p>
   * Example: <pre>{@code
   *     p.drawFrom(Sources.jdbc(
   *         DB_CONNECTION_URL,
   *         "select ID, NAME from PERSON",
   *         resultSet -> new Person(resultSet.getInt(1), resultSet.getString(2))))
   * }</pre>
   */
  public static <T> BatchSource<T> jdbc(
      @Nonnull String connectionURL,
      @Nonnull String query,
      @Nonnull DistributedFunction<? super ResultSet, ? extends T> createOutputFn
  ) {
    return batchFromProcessor("jdbcSource",
        SourceProcessors.readJdbcP(connectionURL, query, createOutputFn));
  }
}
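A slightly fuller sketch that submits the source as a job; the connection URL, query and Person class mirror the Javadoc example above and are hypothetical:

import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

Pipeline p = Pipeline.create();
p.drawFrom(Sources.jdbc(
        DB_CONNECTION_URL,                      // hypothetical JDBC URL constant
        "select ID, NAME from PERSON",
        resultSet -> new Person(resultSet.getInt(1), resultSet.getString(2))))
 .drainTo(Sinks.logger());

JetInstance jet = Jet.newJetInstance();
try {
    jet.newJob(p).join();
} finally {
    Jet.shutdownAll();
}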
origin: hazelcast/hazelcast-jet

/**
 * Returns a source that emits items retrieved from a Hazelcast {@code
 * IList}. All elements are emitted on a single member &mdash; the one
 * where the entire list is stored by the IMDG.
 * <p>
 * If the {@code IList} is modified while being read, the source may miss
 * and/or duplicate some entries.
 * <p>
 * The source does not save any state to snapshot. If the job is restarted,
 * it will re-emit all entries.
 * <p>
 * The default local parallelism for this processor is 1.
 */
@Nonnull
public static <T> BatchSource<T> list(@Nonnull String listName) {
  return batchFromProcessor("listSource(" + listName + ')', readListP(listName));
}
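A minimal sketch of the pipeline-level counterpart, Sources.list, assuming a hypothetical IList named "lines":

import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

Pipeline p = Pipeline.create();
p.drawFrom(Sources.<String>list("lines"))
 .drainTo(Sinks.logger());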
origin: hazelcast/hazelcast-jet

/**
 * Builds a custom file {@link BatchSource} with supplied components and the
 * output function {@code mapOutputFn}.
 * <p>
 * The source does not save any state to snapshot. If the job is restarted,
 * it will re-emit all entries.
 * <p>
 * Any {@code IOException} will cause the job to fail. The files must not
 * change while being read; if they do, the behavior is unspecified.
 * <p>
 * The default local parallelism for this processor is 2 (or 1 if just 1
 * CPU is available).
 *
 * @param mapOutputFn the function which creates output object from each
 *                    line. Gets the filename and line as parameters
 * @param <T> the type of the items the source emits
 */
public <T> BatchSource<T> build(DistributedBiFunction<String, String, ? extends T> mapOutputFn) {
  return batchFromProcessor("filesSource(" + new File(directory, glob) + ')',
      SourceProcessors.readFilesP(directory, charset, glob, sharedFileSystem, mapOutputFn));
}
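A minimal sketch of the builder in use, assuming the Jet 3.x pipeline API; the directory and glob below are hypothetical:

import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

Pipeline p = Pipeline.create();
p.drawFrom(Sources.filesBuilder("/var/log/app")        // hypothetical directory
        .glob("*.log")
        .build((fileName, line) -> fileName + ": " + line))
 .drainTo(Sinks.logger());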
origin: hazelcast/hazelcast-jet

) {
  return batchFromProcessor("jdbcSource",
      SourceProcessors.readJdbcP(connectionSupplier, resultSetFn, createOutputFn));
origin: hazelcast/hazelcast-jet

/**
 * Returns a source that fetches entries from a local Hazelcast {@code IMap}
 * with the specified name and emits them as {@code Map.Entry}. It leverages
 * data locality by making each of the underlying processors fetch only those
 * entries that are stored on the member where it is running.
 * <p>
 * The source does not save any state to snapshot. If the job is restarted,
 * it will re-emit all entries.
 * <p>
 * If the {@code IMap} is modified while being read, or if there is a
 * cluster topology change (triggering data migration), the source may
 * miss and/or duplicate some entries.
 * <p>
 * The default local parallelism for this processor is 2 (or 1 if just 1
 * CPU is available).
 */
@Nonnull
public static <K, V> BatchSource<Entry<K, V>> map(@Nonnull String mapName) {
  return batchFromProcessor("mapSource(" + mapName + ')', readMapP(mapName));
}
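An end-to-end sketch that populates a map and reads it back through the pipeline API; the map name and contents are hypothetical:

import com.hazelcast.jet.IMapJet;
import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

JetInstance jet = Jet.newJetInstance();
try {
    IMapJet<String, Integer> map = jet.getMap("counts");    // hypothetical map name
    map.put("a", 1);
    map.put("b", 2);

    Pipeline p = Pipeline.create();
    p.drawFrom(Sources.<String, Integer>map("counts"))      // emits Map.Entry<String, Integer>
     .drainTo(Sinks.logger());
    jet.newJob(p).join();
} finally {
    Jet.shutdownAll();
}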
origin: hazelcast/hazelcast-jet

) {
  return streamFromProcessorWithWatermarks("mapJournalSource(" + mapName + ')',
      w -> streamMapP(mapName, predicateFn, projectionFn, initialPos, w), false);
origin: hazelcast/hazelcast-jet

    @Nonnull Projection<? super Entry<K, V>, ? extends T> projection
) {
  return batchFromProcessor("mapSource(" + mapName + ')', readMapP(mapName, predicate, projection));
origin: hazelcast/hazelcast-jet-code-samples

SourceProcessors.<Trade, Long, Trade>streamMapP(TRADES_MAP_NAME, DistributedPredicate.alwaysTrue(),
    EventJournalMapEvent::getNewValue,
    JournalInitialPosition.START_FROM_OLDEST,
origin: hazelcast/hazelcast-jet-code-samples

  public static void main(String[] args) throws Exception {
    System.setProperty("hazelcast.logging.type", "log4j");
    // start two embedded Jet members so the job runs on a two-member cluster
    Jet.newJetInstance();
    JetInstance jet = Jet.newJetInstance();
    try {

      IMapJet<Object, Object> map = jet.getMap("map");
      range(0, COUNT).parallel().forEach(i -> map.put("key-" + i, i));

      DAG dag = new DAG();

      Vertex source = dag.newVertex("map-source", SourceProcessors.readMapP(map.getName()));
      Vertex sink = dag.newVertex("file-sink", new WriteFilePSupplier(OUTPUT_FOLDER));
      dag.edge(between(source, sink));

      jet.newJob(dag).join();
      System.out.println("\nHazelcast IMap dumped to folder " + new File(OUTPUT_FOLDER).getAbsolutePath());
    } finally {
      Jet.shutdownAll();
    }
  }
}
origin: hazelcast/hazelcast-jet-code-samples

SourceProcessors.<Trade, Long, Trade>streamMapP(TRADES_MAP_NAME, DistributedPredicate.alwaysTrue(),
    EventJournalMapEvent::getNewValue, JournalInitialPosition.START_FROM_OLDEST,
    wmGenParams(
origin: hazelcast/hazelcast-jet

private void rewriteDagWithSnapshotRestore(DAG dag, long snapshotId, String mapName) {
  IMap<Object, Object> snapshotMap = nodeEngine.getHazelcastInstance().getMap(mapName);
  snapshotId = SnapshotValidator.validateSnapshot(snapshotId, jobIdString(), snapshotMap);
  logger.info("State of " + jobIdString() + " will be restored from snapshot " + snapshotId + ", map=" + mapName);
  List<Vertex> originalVertices = new ArrayList<>();
  dag.iterator().forEachRemaining(originalVertices::add);
  Map<String, Integer> vertexToOrdinal = new HashMap<>();
  Vertex readSnapshotVertex = dag.newVertex(SNAPSHOT_VERTEX_PREFIX + "read",
      readMapP(mapName));
  long finalSnapshotId = snapshotId;
  Vertex explodeVertex = dag.newVertex(SNAPSHOT_VERTEX_PREFIX + "explode",
      () -> new ExplodeSnapshotP(vertexToOrdinal, finalSnapshotId));
  dag.edge(between(readSnapshotVertex, explodeVertex).isolated());
  int index = 0;
  // add the edges
  for (Vertex userVertex : originalVertices) {
    vertexToOrdinal.put(userVertex.getName(), index);
    int destOrdinal = dag.getInboundEdges(userVertex.getName()).size();
    dag.edge(new SnapshotRestoreEdge(explodeVertex, index, userVertex, destOrdinal));
    index++;
  }
}
origin: hazelcast/hazelcast-jet

  public static CompletableFuture<Void> copyMapUsingJob(JetInstance instance, int queueSize,
                             String sourceMap, String targetMap) {
    DAG dag = new DAG();
    Vertex source = dag.newVertex("readMap(" + sourceMap + ')', readMapP(sourceMap));
    Vertex sink = dag.newVertex("writeMap(" + targetMap + ')', writeMapP(targetMap));
    dag.edge(between(source, sink).setConfig(new EdgeConfig().setQueueSize(queueSize)));
    JobConfig jobConfig = new JobConfig()
        .setName("copy-" + sourceMap + "-to-" + targetMap);
    return instance.newJob(dag, jobConfig).getFuture();
  }
}
origin: hazelcast/hazelcast-jet-code-samples

Vertex source = dag.newVertex("source", readMapP(DOCID_NAME));
origin: hazelcast/hazelcast-jet-code-samples

Vertex docSource = dag.newVertex("doc-source", readMapP(DOCID_NAME));
origin: hazelcast/hazelcast-jet-code-samples

Vertex readTickerInfoMap = dag.newVertex("readTickerInfoMap", readMapP(TICKER_INFO_MAP_NAME));
Vertex collectToMap = dag.newVertex("collectToMap",
    Processors.aggregateP(AggregateOperations.toMap(entryKey(), entryValue())));
com.hazelcast.jet.core.processor.SourceProcessors

Javadoc

Static utility class with factories of source processors (the DAG entry points). For other kinds of vertices, refer to the com.hazelcast.jet.core.processor package.

Most used methods

  • readMapP
    Returns a supplier of processors for Sources#map(String,Predicate,Projection).
  • streamMapP
    Returns a supplier of processors for Sources#mapJournal(String,JournalInitialPosition).
  • readCacheP
    Returns a supplier of processors for Sources#cache(String).
  • readFilesP
    Returns a supplier of processors for Sources#filesBuilder. See FileSourceBuilder#build for more details.
  • readJdbcP
    Returns a supplier of processors for Sources#jdbc(String,String,DistributedFunction).
  • readListP
    Returns a supplier of processors for Sources#list(String).
  • readRemoteCacheP
    Returns a supplier of processors for Sources#remoteCache(String,ClientConfig).
  • readRemoteListP
    Returns a supplier of processors for Sources#remoteList(String,ClientConfig).
  • readRemoteMapP
    Returns a supplier of processors for Sources#remoteMap(String,ClientConfig,Predicate,Projection); a usage sketch follows this list.
  • streamCacheP
    Returns a supplier of processors for Sources#cacheJournal(String,JournalInitialPosition).
  • streamFilesP
    Returns a supplier of processors for Sources#filesBuilder. See FileSourceBuilder#buildWatcher for more details.
  • streamJmsQueueP
    Returns a supplier of processors for Sources#jmsQueueBuilder.
  • streamJmsTopicP
  • streamRemoteCacheP
  • streamRemoteMapP
  • streamSocketP
  • toProjection
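The remote variants take a ClientConfig pointing at the other cluster. Below is a minimal sketch for the pipeline counterpart of readRemoteMapP, Sources.remoteMap, assuming the Jet 3.x API; the group name, address and map name are hypothetical:

import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

ClientConfig clientConfig = new ClientConfig();
clientConfig.getGroupConfig().setName("dev");                    // hypothetical remote cluster group
clientConfig.getNetworkConfig().addAddress("192.168.1.10:5701"); // hypothetical member address

Pipeline p = Pipeline.create();
p.drawFrom(Sources.<String, Integer>remoteMap("counts", clientConfig))
 .drainTo(Sinks.logger());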
