ComputationState

How to use ComputationState in org.apache.flink.cep.nfa
Best Java code snippets using org.apache.flink.cep.nfa.ComputationState (Showing top 20 results out of 315)

origin: apache/flink

  public static ComputationState createState(
      final String currentState,
      final NodeId previousEntry,
      final DeweyNumber version,
      final long startTimestamp,
      final EventId startEventID) {
    return new ComputationState(currentState, previousEntry, version, startEventID, startTimestamp);
  }
}
origin: apache/flink

// Excerpt from NFA#computeNextStates: an IGNORE edge keeps the current buffer entry
// and only adjusts the version; a TAKE edge opens a new stage and carries over the start timestamp/event id.
case IGNORE: {
  if (!isStartState(computationState)) {
    final DeweyNumber version;
    if (isEquivalentState(edge.getTargetState(), getState(computationState))) {
      // staying in the same (e.g. looping) state: bump the version only
      final int toIncrease = calculateIncreasingSelfState(
        outgoingEdges.getTotalIgnoreBranches(),
        outgoingEdges.getTotalTakeBranches());
      version = computationState.getVersion().increase(toIncrease);
    } else {
      // IGNORE after PROCEED: start a new stage in the version
      version = computationState.getVersion()
        .increase(totalTakeToSkip + ignoreBranchesToVisit)
        .addStage();
      ignoreBranchesToVisit--;
    }
    addComputationState(
      sharedBufferAccessor,
      resultingComputationStates,
      edge.getTargetState(),
      computationState.getPreviousBufferEntry(),
      version,
      computationState.getStartTimestamp(),
      computationState.getStartEventID()
    );
  }
}
break;
case TAKE:
  final NodeId previousEntry = computationState.getPreviousBufferEntry();
  final DeweyNumber currentVersion = computationState.getVersion().increase(takeBranchesToVisit);
  final DeweyNumber nextVersion = new DeweyNumber(currentVersion).addStage();
  takeBranchesToVisit--;
  // ...
  if (isStartState(computationState)) {
    startTimestamp = timestamp;
    startEventId = event.getEventId();
  } else {
    startTimestamp = computationState.getStartTimestamp();
    startEventId = computationState.getStartEventID();
  }
  // ... the start state itself is re-added with an increased version:
  DeweyNumber startVersion = computationState.getVersion().increase(totalBranches);
origin: apache/flink

public static ComputationState createStartState(final String state, final DeweyNumber version) {
  return createState(state, null, version, -1L, null);
}
origin: apache/flink

private void serializeSingleComputationState(
    ComputationState computationState,
    DataOutputView target) throws IOException {
  StringValue.writeString(computationState.getCurrentStateName(), target);
  nodeIdSerializer.serialize(computationState.getPreviousBufferEntry(), target);
  versionSerializer.serialize(computationState.getVersion(), target);
  target.writeLong(computationState.getStartTimestamp());
  serializeStartEvent(computationState.getStartEventID(), target);
}
origin: apache/flink

/**
 * Extracts all the sequences of events from the start to the given computation state. An event
 * sequence is returned as a map which contains the events and the names of the states to which
 * the events were mapped.
 *
 * @param sharedBufferAccessor The accessor to {@link SharedBuffer} from which to extract the matches
 * @param computationState The end computation state of the extracted event sequences
 * @return Collection of event sequences which end in the given computation state
 * @throws Exception Thrown if the system cannot access the state.
 */
private Map<String, List<EventId>> extractCurrentMatches(
    final SharedBufferAccessor<T> sharedBufferAccessor,
    final ComputationState computationState) throws Exception {
  if (computationState.getPreviousBufferEntry() == null) {
    return new HashMap<>();
  }
  List<Map<String, List<EventId>>> paths = sharedBufferAccessor.extractPatterns(
      computationState.getPreviousBufferEntry(),
      computationState.getVersion());
  if (paths.isEmpty()) {
    return new HashMap<>();
  }
  // for a given computation state, we cannot have more than one matching pattern.
  Preconditions.checkState(paths.size() == 1);
  return paths.get(0);
}
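
The Javadoc above describes the result as a map from state names to the events matched at those states. Below is a minimal, hypothetical sketch of that shape; the class name and state names are made up, and it assumes EventId's public (id, timestamp) constructor:

  import org.apache.flink.cep.nfa.sharedbuffer.EventId;

  import java.util.Arrays;
  import java.util.Collections;
  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;

  public class ExtractedMatchShape {
    public static void main(String[] args) {
      // EventId pairs an int id with the event's timestamp (assumed constructor).
      EventId first = new EventId(0, 1000L);
      EventId second = new EventId(1, 1500L);
      EventId third = new EventId(2, 2000L);

      // One entry per pattern state: state name -> ids of the events matched there.
      Map<String, List<EventId>> match = new HashMap<>();
      match.put("start", Collections.singletonList(first));
      match.put("middle", Arrays.asList(second, third));

      System.out.println(match);
    }
  }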
origin: apache/flink

/**
 * Prunes matches/partial matches based on the chosen strategy.
 *
 * @param matchesToPrune current partial matches
 * @param matchedResult  already completed matches
 * @param sharedBufferAccessor   accessor to corresponding shared buffer
 * @throws Exception Thrown if the state could not be accessed
 */
public void prune(
    Collection<ComputationState> matchesToPrune,
    Collection<Map<String, List<EventId>>> matchedResult,
    SharedBufferAccessor<?> sharedBufferAccessor) throws Exception {
  EventId pruningId = getPruningId(matchedResult);
  if (pruningId != null) {
    List<ComputationState> discardStates = new ArrayList<>();
    for (ComputationState computationState : matchesToPrune) {
      if (computationState.getStartEventID() != null &&
        shouldPrune(computationState.getStartEventID(), pruningId)) {
        sharedBufferAccessor.releaseNode(computationState.getPreviousBufferEntry());
        discardStates.add(computationState);
      }
    }
    matchesToPrune.removeAll(discardStates);
  }
}
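
The pruning above is driven by the AfterMatchSkipStrategy chosen when the pattern is defined. Below is a minimal sketch of how such a strategy is configured at the Pattern API level; the class name, event type and conditions are illustrative, and it assumes a flink-cep version where AfterMatchSkipStrategy lives in org.apache.flink.cep.nfa.aftermatch:

  import org.apache.flink.cep.nfa.aftermatch.AfterMatchSkipStrategy;
  import org.apache.flink.cep.pattern.Pattern;
  import org.apache.flink.cep.pattern.conditions.SimpleCondition;

  public class SkipStrategySketch {
    public static void main(String[] args) {
      // SKIP_PAST_LAST_EVENT discards partial matches that started before the
      // end of a completed match; prune(...) above performs that discarding.
      AfterMatchSkipStrategy skip = AfterMatchSkipStrategy.skipPastLastEvent();

      Pattern<String, ?> pattern = Pattern.<String>begin("start", skip)
        .where(new SimpleCondition<String>() {
          @Override
          public boolean filter(String value) {
            return value.startsWith("a");
          }
        })
        .next("middle")
        .where(new SimpleCondition<String>() {
          @Override
          public boolean filter(String value) {
            return value.startsWith("b");
          }
        });

      System.out.println(pattern.getAfterMatchSkipStrategy());
    }
  }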
origin: apache/flink

private State<T> getState(ComputationState state) {
  return states.get(state.getCurrentStateName());
}
origin: org.apache.flink/flink-cep_2.10

// Excerpt from the legacy NFA's state restore: previously serialized computation
// states are rebuilt via ComputationState.createState / createStartState.
      // ... predicate locating the start state:
      return input != null && input.getState().getName().equals(BEGINNING_STATE_NAME);
    }).getState();
  // ...
  if (!readState.isStartState()) {
    final String previousName = readState.getState().getName();
    final String currentName = Iterators.find(
      readState.getState().getStateTransitions().iterator(),
      new Predicate<StateTransition<T>>() {
        @Override
        // ...
    computationStates.add(ComputationState.createState(
      this,
      convertedStates.get(currentName),
      previousState,
      readState.getEvent(),
      0,
      readState.getTimestamp(),
      readState.getVersion(),
      readState.getStartTimestamp()
    ));
  }
// ...
computationStates.add(ComputationState.createStartState(
  this,
  convertedStates.get(startName),
  // ...
origin: org.apache.flink/flink-cep_2.10

switch (edge.getAction()) {
  case IGNORE: {
    if (!computationState.isStartState()) {
      final DeweyNumber version;
      if (isEquivalentState(edge.getTargetState(), computationState.getState())) {
        // staying in the same state: only the version is increased
        version = computationState.getVersion().increase(toIncrease);
      } else {
        // IGNORE after PROCEED: open a new stage in the version
        version = computationState.getVersion()
          .increase(totalTakeToSkip + ignoreBranchesToVisit)
          .addStage();
      }
      addComputationState(
        resultingComputationStates,
        edge.getTargetState(),
        computationState.getPreviousState(),
        computationState.getEvent(),
        computationState.getCounter(),
        computationState.getTimestamp(),
        version,
        computationState.getStartTimestamp()
      );
    }
    break;
  }
  case TAKE:
    final State<T> nextState = edge.getTargetState();
    final State<T> currentState = edge.getSourceState();
    final State<T> previousState = computationState.getPreviousState();
    final T previousEvent = computationState.getEvent();
origin: apache/flink

  // Excerpt from processing completed matches according to the skip strategy:
  nfaState.getCompletedMatches().poll();
  List<Map<String, List<EventId>>> matchedResult =
    sharedBufferAccessor.extractPatterns(earliestMatch.getPreviousBufferEntry(), earliestMatch.getVersion());
  // ... prune partial and completed matches, materialize the result ...
  sharedBufferAccessor.releaseNode(earliestMatch.getPreviousBufferEntry());
  earliestMatch = nfaState.getCompletedMatches().peek();
// ...
nfaState.getPartialMatches().removeIf(pm -> pm.getStartEventID() != null && !partialMatches.contains(pm));
origin: apache/flink

// Excerpt from the NFA's event processing: shared-buffer nodes are released
// when branches are discarded and when matches are materialized.
} else if (!newComputationStates.iterator().next().equals(computationState)) {
  nfaState.setStateChanged();
}
// ... a stop state was reached: release the entry of that path
sharedBufferAccessor.releaseNode(newComputationState.getPreviousBufferEntry());
// ... and the rest of the discarded branch
sharedBufferAccessor.releaseNode(state.getPreviousBufferEntry());
// ... each completed match is materialized and its entry released
sharedBufferAccessor.materializeMatch(
  sharedBufferAccessor.extractPatterns(
    match.getPreviousBufferEntry(),
    match.getVersion()).get(0)
);
sharedBufferAccessor.releaseNode(match.getPreviousBufferEntry());
origin: org.apache.flink/flink-cep_2.10

@Override
public void serialize(NFA<T> record, DataOutputView target) throws IOException {
  serializeStates(record.states, target);
  target.writeLong(record.windowTime);
  target.writeBoolean(record.handleTimeout);
  sharedBufferSerializer.serialize(record.eventSharedBuffer, target);
  target.writeInt(record.computationStates.size());
  StringSerializer stateNameSerializer = StringSerializer.INSTANCE;
  LongSerializer timestampSerializer = LongSerializer.INSTANCE;
  DeweyNumber.DeweyNumberSerializer versionSerializer = new DeweyNumber.DeweyNumberSerializer();
  for (ComputationState<T> computationState: record.computationStates) {
    stateNameSerializer.serialize(computationState.getState().getName(), target);
    stateNameSerializer.serialize(computationState.getPreviousState() == null
        ? null : computationState.getPreviousState().getName(), target);
    timestampSerializer.serialize(computationState.getTimestamp(), target);
    versionSerializer.serialize(computationState.getVersion(), target);
    timestampSerializer.serialize(computationState.getStartTimestamp(), target);
    target.writeInt(computationState.getCounter());
    if (computationState.getEvent() == null) {
      target.writeBoolean(false);
    } else {
      target.writeBoolean(true);
      eventSerializer.serialize(computationState.getEvent(), target);
    }
  }
}
origin: org.apache.flink/flink-cep_2.10

// Excerpt from the legacy NFA#process: timed-out, final, stop and discarded
// computation states all release their entries in the event shared buffer.
if (!computationState.isStartState() &&
  windowTime > 0L &&
  timestamp - computationState.getStartTimestamp() >= windowTime) {
  // ... extract the timed-out pattern (if timeout handling is enabled), then release it:
  eventSharedBuffer.release(
      NFAStateNameHandler.getOriginalNameFromInternal(computationState.getPreviousState().getName()),
      computationState.getEvent(),
      computationState.getTimestamp(),
      computationState.getCounter());
// ...
  if (newComputationState.isFinalState()) {
    // reached a final state: extract the match, then release it
    eventSharedBuffer.release(
        NFAStateNameHandler.getOriginalNameFromInternal(
            newComputationState.getPreviousState().getName()),
        newComputationState.getEvent(),
        newComputationState.getTimestamp(),
        computationState.getCounter());
  } else if (newComputationState.isStopState()) {
    // reached a stop state: release its entry
    eventSharedBuffer.release(
        NFAStateNameHandler.getOriginalNameFromInternal(
            newComputationState.getPreviousState().getName()),
        newComputationState.getEvent(),
        newComputationState.getTimestamp(),
        computationState.getCounter());
  } else {
    // ... discarded branch: release each retained state
    eventSharedBuffer.release(
        NFAStateNameHandler.getOriginalNameFromInternal(
            state.getPreviousState().getName()),
        state.getEvent(),
        state.getTimestamp(),
        state.getCounter());
origin: apache/flink

public static ComputationState createStartState(final String state) {
  return createStartState(state, new DeweyNumber(1));
}
origin: apache/flink

  // Timed-out partial match: extract and emit it, then release its buffer entry.
  Map<String, List<T>> timedOutPattern = sharedBufferAccessor.materializeMatch(extractCurrentMatches(
    sharedBufferAccessor,
    computationState));
  timeoutResult.add(Tuple2.of(timedOutPattern, computationState.getStartTimestamp() + windowTime));
// ...
sharedBufferAccessor.releaseNode(computationState.getPreviousBufferEntry());
origin: org.apache.flink/flink-cep_2.10

// Legacy extractCurrentMatches: matched paths are looked up in the event shared
// buffer by previous state name, event, timestamp, counter and version.
if (computationState.getPreviousState() == null) {
  return new HashMap<>();
}
// ... extract the matched paths:
eventSharedBuffer.extractPatterns(
    NFAStateNameHandler.getOriginalNameFromInternal(
        computationState.getPreviousState().getName()),
    computationState.getEvent(),
    computationState.getTimestamp(),
    computationState.getCounter(),
    computationState.getVersion());
origin: apache/flink

private boolean isStateTimedOut(final ComputationState state, final long timestamp) {
  return !isStartState(state) && windowTime > 0L && timestamp - state.getStartTimestamp() >= windowTime;
}
origin: org.apache.flink/flink-cep_2.11

sharedBufferAccessor.releaseNode(computationState.getPreviousBufferEntry());
origin: apache/flink

private boolean isStopState(ComputationState state) {
  State<T> stateObject = getState(state);
  if (stateObject == null) {
    throw new FlinkRuntimeException("State " + state.getCurrentStateName() + " does not exist in the NFA. NFA has states "
      + states.values());
  }
  return stateObject.isStop();
}
org.apache.flink.cep.nfa.ComputationState

Javadoc

Helper class which encapsulates the state of the NFA computation. It points to the current state, the previous entry of the pattern, the current version and the starting timestamp of the overall pattern.
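
As a rough illustration of how these pieces fit together, here is a hypothetical, stand-alone sketch that builds states with the factory methods shown in the snippets above. The class name, state names, timestamp and the null placeholders for the shared-buffer entry and start event are illustrative assumptions; inside Flink they are supplied by the NFA and the SharedBufferAccessor.

  import org.apache.flink.cep.nfa.ComputationState;

  public class ComputationStateSketch {
    public static void main(String[] args) {
      // The NFA seeds its partial matches with a start state; its version
      // defaults to DeweyNumber(1), as in createStartState(String) above.
      ComputationState start = ComputationState.createStartState("start");

      // When a TAKE edge is followed, a successor state is created. The previous
      // shared-buffer entry (NodeId) and the start EventId normally come from the
      // SharedBufferAccessor; null is used here purely for illustration,
      // mirroring what createStartState itself passes.
      ComputationState next = ComputationState.createState(
        "middle",
        null,                                       // NodeId of the previous buffer entry
        start.getVersion().increase(1).addStage(),  // advance the Dewey version
        1000L,                                      // timestamp of the first matched event
        null);                                      // EventId of the event that started the match

      System.out.println(next.getCurrentStateName() + " @ version " + next.getVersion());
    }
  }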

Most used methods

  • <init>
  • createStartState
  • createState
  • getCurrentStateName
  • getStartTimestamp
  • getVersion
  • equals
  • getPreviousBufferEntry
  • getStartEventID
  • getConditionContext
  • getCounter
  • getEvent
  • getPreviousState
  • getState
  • getTimestamp
  • isFinalState
  • isStartState
  • isStopState
