SimpleVersionedSerialization.writeVersionAndSerialize

How to use the writeVersionAndSerialize method in org.apache.flink.core.io.SimpleVersionedSerialization

Best Java code snippets using org.apache.flink.core.io.SimpleVersionedSerialization.writeVersionAndSerialize

origin: apache/flink

private void snapshotActiveBuckets(
    final long checkpointId,
    final ListState<byte[]> bucketStatesContainer) throws Exception {
  for (Bucket<IN, BucketID> bucket : activeBuckets.values()) {
    final BucketState<BucketID> bucketState = bucket.onReceptionOfCheckpoint(checkpointId);
    final byte[] serializedBucketState = SimpleVersionedSerialization
        .writeVersionAndSerialize(bucketStateSerializer, bucketState);
    bucketStatesContainer.add(serializedBucketState);
    if (LOG.isDebugEnabled()) {
      LOG.debug("Subtask {} checkpointing: {}", subtaskIndex, bucketState);
    }
  }
}
origin: apache/flink

@VisibleForTesting
void serializeV1(BucketState<BucketID> state, DataOutputView out) throws IOException {
  SimpleVersionedSerialization.writeVersionAndSerialize(bucketIdSerializer, state.getBucketId(), out);
  out.writeUTF(state.getBucketPath().toString());
  out.writeLong(state.getInProgressFileCreationTime());
  // put the current open part file
  if (state.hasInProgressResumableFile()) {
    final RecoverableWriter.ResumeRecoverable resumable = state.getInProgressResumableFile();
    out.writeBoolean(true);
    SimpleVersionedSerialization.writeVersionAndSerialize(resumableSerializer, resumable, out);
  }
  else {
    out.writeBoolean(false);
  }
  // put the map of pending files per checkpoint
  final Map<Long, List<RecoverableWriter.CommitRecoverable>> pendingCommitters = state.getCommittableFilesPerCheckpoint();
  // manually keep the version here to save some bytes
  out.writeInt(commitableSerializer.getVersion());
  out.writeInt(pendingCommitters.size());
  for (Entry<Long, List<RecoverableWriter.CommitRecoverable>> resumablesForCheckpoint : pendingCommitters.entrySet()) {
    List<RecoverableWriter.CommitRecoverable> resumables = resumablesForCheckpoint.getValue();
    out.writeLong(resumablesForCheckpoint.getKey());
    out.writeInt(resumables.size());
    for (RecoverableWriter.CommitRecoverable resumable : resumables) {
      byte[] serialized = commitableSerializer.serialize(resumable);
      out.writeInt(serialized.length);
      out.write(serialized);
    }
  }
}
origin: apache/flink

@Test
public void testSerializationRoundTrip() throws IOException {
  final SimpleVersionedSerializer<String> utfEncoder = new SimpleVersionedSerializer<String>() {
    private static final int VERSION = Integer.MAX_VALUE / 2; // version should occupy many bytes
    @Override
    public int getVersion() {
      return VERSION;
    }
    @Override
    public byte[] serialize(String str) throws IOException {
      return str.getBytes(StandardCharsets.UTF_8);
    }
    @Override
    public String deserialize(int version, byte[] serialized) throws IOException {
      assertEquals(VERSION, version);
      return new String(serialized, StandardCharsets.UTF_8);
    }
  };
  final String testString = "dugfakgs";
  final DataOutputSerializer out = new DataOutputSerializer(32);
  SimpleVersionedSerialization.writeVersionAndSerialize(utfEncoder, testString, out);
  final byte[] outBytes = out.getCopyOfBuffer();
  final byte[] bytes = SimpleVersionedSerialization.writeVersionAndSerialize(utfEncoder, testString);
  assertArrayEquals(bytes, outBytes);
  final DataInputDeserializer in = new DataInputDeserializer(bytes);
  final String deserialized = SimpleVersionedSerialization.readVersionAndDeSerialize(utfEncoder, in);
  final String deserializedFromBytes = SimpleVersionedSerialization.readVersionAndDeSerialize(utfEncoder, outBytes);
  assertEquals(testString, deserialized);
  assertEquals(testString, deserializedFromBytes);
}
origin: apache/flink

@Test
public void testSerializationEmpty() throws IOException {
  final File testFolder = tempFolder.newFolder();
  final FileSystem fs = FileSystem.get(testFolder.toURI());
  final RecoverableWriter writer = fs.createRecoverableWriter();
  final Path testBucket = new Path(testFolder.getPath(), "test");
  final BucketState<String> bucketState = new BucketState<>(
      "test", testBucket, Long.MAX_VALUE, null, new HashMap<>());
  final SimpleVersionedSerializer<BucketState<String>> serializer =
      new BucketStateSerializer<>(
          writer.getResumeRecoverableSerializer(),
          writer.getCommitRecoverableSerializer(),
          SimpleVersionedStringSerializer.INSTANCE
      );
  byte[] bytes = SimpleVersionedSerialization.writeVersionAndSerialize(serializer, bucketState);
  final BucketState<String> recoveredState =  SimpleVersionedSerialization.readVersionAndDeSerialize(serializer, bytes);
  Assert.assertEquals(testBucket, recoveredState.getBucketPath());
  Assert.assertNull(recoveredState.getInProgressResumableFile());
  Assert.assertTrue(recoveredState.getCommittableFilesPerCheckpoint().isEmpty());
}
origin: apache/flink

SimpleVersionedSerialization.writeVersionAndSerialize(emptySerializer, "abc", out);
final byte[] outBytes = out.getCopyOfBuffer();
final byte[] bytes = SimpleVersionedSerialization.writeVersionAndSerialize(emptySerializer, "abc");
assertArrayEquals(bytes, outBytes);
origin: apache/flink

byte[] bytes = SimpleVersionedSerialization.writeVersionAndSerialize(serializer, bucketState);
origin: apache/flink

stream.close();
byte[] bytes = SimpleVersionedSerialization.writeVersionAndSerialize(serializer, bucketState);
origin: apache/flink

@Test
public void testSerializationOnlyInProgress() throws IOException {
  final File testFolder = tempFolder.newFolder();
  final FileSystem fs = FileSystem.get(testFolder.toURI());
  final Path testBucket = new Path(testFolder.getPath(), "test");
  final RecoverableWriter writer = fs.createRecoverableWriter();
  final RecoverableFsDataOutputStream stream = writer.open(testBucket);
  stream.write(IN_PROGRESS_CONTENT.getBytes(Charset.forName("UTF-8")));
  final RecoverableWriter.ResumeRecoverable current = stream.persist();
  final BucketState<String> bucketState = new BucketState<>(
      "test", testBucket, Long.MAX_VALUE, current, new HashMap<>());
  final SimpleVersionedSerializer<BucketState<String>> serializer =
      new BucketStateSerializer<>(
          writer.getResumeRecoverableSerializer(),
          writer.getCommitRecoverableSerializer(),
          SimpleVersionedStringSerializer.INSTANCE
      );
  final byte[] bytes = SimpleVersionedSerialization.writeVersionAndSerialize(serializer, bucketState);
  // close the stream to simulate that writing to the file is over
  stream.close();
  final BucketState<String> recoveredState =  SimpleVersionedSerialization.readVersionAndDeSerialize(serializer, bytes);
  Assert.assertEquals(testBucket, recoveredState.getBucketPath());
  FileStatus[] statuses = fs.listStatus(testBucket.getParent());
  Assert.assertEquals(1L, statuses.length);
  Assert.assertTrue(
      statuses[0].getPath().getPath().startsWith(
          (new Path(testBucket.getParent(), ".test.inprogress")).toString())
  );
}
org.apache.flink.core.io.SimpleVersionedSerialization.writeVersionAndSerialize

Javadoc

Serializes the version and datum into a byte array. The first four bytes will be occupied by the version (as returned by SimpleVersionedSerializer#getVersion()), written in big-endian encoding. The remaining bytes will be the serialized datum, as produced by SimpleVersionedSerializer#serialize(Object). The resulting array will hence be four bytes larger than the serialized datum.

Data serialized via this method can be deserialized via #readVersionAndDeSerialize(SimpleVersionedSerializer,byte[]).
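To make that wire format concrete, here is a minimal, self-contained sketch (not taken from the Flink sources; the UTF8 serializer below is purely illustrative) that writes a value and inspects the resulting array: the first four bytes carry the version in big-endian order, and the remaining bytes are exactly the payload produced by serialize().

import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.flink.core.io.SimpleVersionedSerialization;
import org.apache.flink.core.io.SimpleVersionedSerializer;

public class VersionPrefixExample {

  // Hypothetical serializer used only for illustration.
  static final SimpleVersionedSerializer<String> UTF8 = new SimpleVersionedSerializer<String>() {
    @Override
    public int getVersion() {
      return 3;
    }

    @Override
    public byte[] serialize(String value) {
      return value.getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public String deserialize(int version, byte[] serialized) {
      return new String(serialized, StandardCharsets.UTF_8);
    }
  };

  public static void main(String[] args) throws IOException {
    final byte[] bytes = SimpleVersionedSerialization.writeVersionAndSerialize(UTF8, "abc");

    // Four bytes larger than the serialized datum: 4 (version) + 3 ("abc") = 7.
    System.out.println(bytes.length);

    // The version occupies the first four bytes in big-endian order: 0x00 0x00 0x00 0x03.
    final int version = ((bytes[0] & 0xFF) << 24)
        | ((bytes[1] & 0xFF) << 16)
        | ((bytes[2] & 0xFF) << 8)
        | (bytes[3] & 0xFF);
    System.out.println(version); // 3

    // Everything after the prefix is the serialized datum.
    System.out.println(new String(bytes, 4, bytes.length - 4, StandardCharsets.UTF_8)); // abc
  }
}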

Popular methods of SimpleVersionedSerialization

  • readVersionAndDeSerialize
    Deserializes the version and datum from a byte array. The first four bytes will be read as the version, in big-endian encoding; the remaining bytes are passed to the serializer's deserialize method (a sketch of this round trip follows below).
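Because the version travels with the bytes, the serializer's deserialize(version, payload) can dispatch on it when the format evolves. The following is a hedged, self-contained sketch of that pattern; the EVOLVING serializer and its version history are invented for illustration and are not part of Flink.

import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.flink.core.io.SimpleVersionedSerialization;
import org.apache.flink.core.io.SimpleVersionedSerializer;

public class ReadVersionExample {

  // Hypothetical serializer whose current format version is 2; version 1 payloads
  // are assumed (purely for illustration) to have been written as US-ASCII.
  static final SimpleVersionedSerializer<String> EVOLVING = new SimpleVersionedSerializer<String>() {
    @Override
    public int getVersion() {
      return 2;
    }

    @Override
    public byte[] serialize(String value) {
      return value.getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public String deserialize(int version, byte[] serialized) throws IOException {
      switch (version) {
        case 1:
          return new String(serialized, StandardCharsets.US_ASCII);
        case 2:
          return new String(serialized, StandardCharsets.UTF_8);
        default:
          throw new IOException("Unrecognized version: " + version);
      }
    }
  };

  public static void main(String[] args) throws IOException {
    final byte[] bytes = SimpleVersionedSerialization.writeVersionAndSerialize(EVOLVING, "hello");
    // readVersionAndDeSerialize reads the 4-byte big-endian prefix (2 here) and hands it,
    // together with the remaining bytes, to deserialize(version, payload).
    System.out.println(SimpleVersionedSerialization.readVersionAndDeSerialize(EVOLVING, bytes));
  }
}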
