RollingStorage

How to use RollingStorage in io.pravega.segmentstore.storage.rolling

Best Java code snippets using io.pravega.segmentstore.storage.rolling.RollingStorage (Showing top 20 results out of 315)

origin: pravega/pravega

@Override
public Storage createStorageAdapter() {
  return new AsyncStorageWrapper(new RollingStorage(this.baseStorage), this.executor);
}
origin: pravega/pravega

@Override
public SegmentHandle create(String streamSegmentName) throws StreamSegmentException {
  return create(streamSegmentName, this.defaultRollingPolicy);
}
origin: pravega/pravega

@Override
@SneakyThrows(StreamSegmentException.class)
public boolean exists(String segmentName) {
  try {
    // Try to open-read the segment; this checks both the header file and the existence of the last SegmentChunk.
    openRead(segmentName);
    return true;
  } catch (StreamSegmentNotExistsException ex) {
    return false;
  }
}
origin: pravega/pravega

/**
 * Tests the ability to concat using the header file for those cases when native concat cannot be used because the
 * source Segment has a single SegmentChunk, but it's too large to fit into the Target's active SegmentChunk.
 */
@Test
public void testConcatHeaderSingleFile() throws Exception {
  final int initialTargetLength = (int) DEFAULT_ROLLING_POLICY.getMaxLength() / 2;
  final int bigSourceLength = (int) DEFAULT_ROLLING_POLICY.getMaxLength() - initialTargetLength + 1;
  final String sourceSegmentName = "SourceSegment";
  @Cleanup
  val baseStorage = new InMemoryStorage();
  @Cleanup
  val s = new RollingStorage(baseStorage, DEFAULT_ROLLING_POLICY);
  s.initialize(1);
  // Create a Target Segment and a Source Segment and write some data to them.
  s.create(SEGMENT_NAME);
  val targetHandle = (RollingSegmentHandle) s.openWrite(SEGMENT_NAME);
  val writeStream = new ByteArrayOutputStream();
  populate(s, targetHandle, 1, initialTargetLength, initialTargetLength, writeStream);
  s.create(sourceSegmentName);
  val sourceHandle = (RollingSegmentHandle) s.openWrite(sourceSegmentName);
  populate(s, sourceHandle, 1, bigSourceLength, bigSourceLength, writeStream);
  s.seal(sourceHandle);
  // Concat and verify the handle has been updated accordingly.
  s.concat(targetHandle, initialTargetLength, sourceSegmentName);
  checkConcatResult(s, targetHandle, sourceSegmentName, 2, initialTargetLength + bigSourceLength);
  checkWrittenData(writeStream.toByteArray(), s.openRead(SEGMENT_NAME), s);
}
origin: pravega/pravega

val baseStorage = new InMemoryStorage();
@Cleanup
val s = new RollingStorage(baseStorage, DEFAULT_ROLLING_POLICY);
s.initialize(1);
// ... (snippet truncated: the enclosing assertThrows(...) call and prior setup are elided)
    () -> s.create(segmentName),
    ex -> ex instanceof StreamSegmentExistsException);
Assert.assertTrue("Non-Header Segment does not exist after failed create() attempt.", baseStorage.exists(segmentName));
Assert.assertTrue("Unexpected result from exists() when called on a non-header Segment.", s.exists(segmentName));
val writeHandle = s.openWrite(segmentName);
val os = new ByteArrayOutputStream();
populate(s, writeHandle, os);
s.seal(writeHandle);
byte[] writtenData = os.toByteArray();
Assert.assertFalse("A header was left behind (after write).",
    /* second argument elided in this snippet */);
val rollingInfo = s.getStreamSegmentInfo(segmentName);
// ... (baseInfo is defined in an elided part of this snippet)
Assert.assertTrue("Segment not sealed.", baseInfo.isSealed());
Assert.assertEquals("Unexpected Segment length.", writtenData.length, baseInfo.getLength());
val readHandle = s.openRead(segmentName);
checkWrittenData(writtenData, readHandle, s);
// ... (snippet truncated: truncation and non-header concat steps are only partially shown)
s.truncate(writeHandle, truncateOffset);
val nonHeaderHandle = s.openWrite(nonHeaderName);
s.concat(nonHeaderHandle, 0, segmentName);
origin: pravega/pravega

@Override
public void concat(SegmentHandle targetHandle, long targetOffset, String sourceSegment) throws StreamSegmentException {
  val target = asWritableHandle(targetHandle);
  ensureOffset(target, targetOffset);
  ensureNotDeleted(target);
  ensureNotSealed(target);
  long traceId = LoggerHelpers.traceEnter(log, "concat", target, targetOffset, sourceSegment);
  RollingSegmentHandle source = (RollingSegmentHandle) openWrite(sourceSegment);
  Preconditions.checkState(source.isSealed(), "Cannot concat segment '%s' into '%s' because it is not sealed.",
      sourceSegment, target.getSegmentName());
  // ... (snippet truncated: the condition guarding this early exit is elided)
    delete(source);
    return;
  // ...
  refreshChunkExistence(source);
  Preconditions.checkState(source.chunks().stream().allMatch(SegmentChunk::exists),
      "Cannot use Segment '%s' as concat source because it is truncated.", source.getSegmentName());
  if (shouldConcatNatively(source, target)) {
    // ... (snippet truncated here; the branches of this method are only partially shown)
      rollover(target);
      createHeader(target);
    List<SegmentChunk> newSegmentChunks = rebase(source.chunks(), target.length());
    sealActiveChunk(target);
    serializeBeginConcat(target, source);
    this.baseStorage.concat(target.getHeaderHandle(), target.getHeaderLength(), source.getHeaderHandle().getSegmentName());
    target.increaseHeaderLength(source.getHeaderLength());
    // ... (remainder of concat() elided)
origin: pravega/pravega

val baseStorage = new InMemoryStorage();
@Cleanup
val s = new RollingStorage(baseStorage, DEFAULT_ROLLING_POLICY);
s.initialize(1);
s.create(SEGMENT_NAME);
val writeHandle = s.openWrite(SEGMENT_NAME);
val readHandle = s.openRead(SEGMENT_NAME); // Open now, before writing, so we force a refresh.
val writeStream = new ByteArrayOutputStream();
populate(s, writeHandle, writeStream);
Assert.assertEquals("Unexpected segment length.", writtenData.length, s.getStreamSegmentInfo(SEGMENT_NAME).getLength());
int checkedLength = 0;
while (checkedLength < writtenData.length) {
origin: pravega/pravega

val baseStorage = new TestStorage();
@Cleanup
val s = new RollingStorage(baseStorage, DEFAULT_ROLLING_POLICY);
s.initialize(1);
s.create(SEGMENT_NAME);
val writeHandle = (RollingSegmentHandle) s.openWrite(SEGMENT_NAME);
populate(s, writeHandle, null);
AssertExtensions.assertThrows(
    "delete() did not propagate proper exception on failure.",
    () -> s.delete(writeHandle),
    ex -> ex instanceof IntentionalException);
Assert.assertTrue("Not expecting segment to be deleted yet.", s.exists(SEGMENT_NAME));
Assert.assertFalse("Expected first SegmentChunk to be marked as deleted.", writeHandle.chunks().get(failAtIndex - 1).exists());
Assert.assertTrue("Expected failed-to-delete SegmentChunk to not be marked as deleted.", writeHandle.chunks().get(failAtIndex).exists());
s.delete(writeHandle);
Assert.assertFalse("Expecting the segment to be deleted.", s.exists(SEGMENT_NAME));
Assert.assertTrue("Expected the handle to be marked as deleted.", writeHandle.isDeleted());
Assert.assertFalse("Expected all SegmentChunks to be marked as deleted.", writeHandle.chunks().stream().anyMatch(SegmentChunk::exists));
origin: pravega/pravega

/**
 * Tests the ability to truncate Sealed Segments.
 */
@Test
public void testTruncateSealed() throws Exception {
  // Write small and large writes, alternatively.
  @Cleanup
  val baseStorage = new TestStorage();
  @Cleanup
  val s = new RollingStorage(baseStorage, DEFAULT_ROLLING_POLICY);
  s.initialize(1);
  // Create a Segment, write some data, then seal it.
  s.create(SEGMENT_NAME);
  val appendHandle = (RollingSegmentHandle) s.openWrite(SEGMENT_NAME);
  val writeStream = new ByteArrayOutputStream();
  populate(s, appendHandle, writeStream);
  s.seal(appendHandle);
  byte[] writtenData = writeStream.toByteArray();
  val truncateHandle = (RollingSegmentHandle) s.openWrite(SEGMENT_NAME);
  Assert.assertTrue("Handle not read-only after sealing.", truncateHandle.isReadOnly());
  Assert.assertTrue("Handle not sealed after sealing.", truncateHandle.isSealed());
  // Test that truncate works in this scenario.
  testProgressiveTruncate(truncateHandle, truncateHandle, writtenData, s, baseStorage);
}
origin: pravega/pravega

/**
 * Tests the case when Create was interrupted after it created the Header file but before populating it.
 */
@Test
public void testCreateRecovery() throws Exception {
  @Cleanup
  val baseStorage = new TestStorage();
  @Cleanup
  val s = new RollingStorage(baseStorage, DEFAULT_ROLLING_POLICY);
  s.initialize(1);
  // Create an empty header file. This simulates a create() operation that failed mid-way.
  baseStorage.create(StreamSegmentNameUtils.getHeaderSegmentName(SEGMENT_NAME));
  Assert.assertFalse("Not expecting Segment to exist.", s.exists(SEGMENT_NAME));
  AssertExtensions.assertThrows(
      "Not expecting Segment to exist (getStreamSegmentInfo).",
      () -> s.getStreamSegmentInfo(SEGMENT_NAME),
      ex -> ex instanceof StreamSegmentNotExistsException);
  AssertExtensions.assertThrows(
      "Not expecting Segment to exist (openHandle).",
      () -> s.openRead(SEGMENT_NAME),
      ex -> ex instanceof StreamSegmentNotExistsException);
  // Retry the operation and verify everything is in place.
  s.create(SEGMENT_NAME);
  val si = s.getStreamSegmentInfo(SEGMENT_NAME);
  Assert.assertEquals("Expected the Segment to have been created.", 0, si.getLength());
}
origin: pravega/pravega

/**
 * Tests the ability to auto-refresh a Write Handle upon offset disagreement.
 */
@Test
public void testRefreshHandleBadOffset() throws Exception {
  // Write small and large writes, alternatively.
  @Cleanup
  val baseStorage = new InMemoryStorage();
  @Cleanup
  val s = new RollingStorage(baseStorage, DEFAULT_ROLLING_POLICY);
  s.initialize(1);
  s.create(SEGMENT_NAME);
  val h1 = s.openWrite(SEGMENT_NAME);
  val h2 = s.openWrite(SEGMENT_NAME); // Open now, before writing, so we force a refresh.
  byte[] data = "data".getBytes();
  s.write(h1, 0, new ByteArrayInputStream(data), data.length);
  s.write(h2, data.length, new ByteArrayInputStream(data), data.length);
  // Check that no file has exceeded its maximum length.
  byte[] expectedData = new byte[data.length * 2];
  System.arraycopy(data, 0, expectedData, 0, data.length);
  System.arraycopy(data, 0, expectedData, data.length, data.length);
  checkWrittenData(expectedData, h2, s);
}
origin: pravega/pravega

@Override
public void delete(SegmentHandle handle) throws StreamSegmentException {
  val h = asReadableHandle(handle);
  long traceId = LoggerHelpers.traceEnter(log, "delete", handle);
  // ... (snippet truncated: surrounding control flow is elided)
      val writeHandle = h.isReadOnly() ? (RollingSegmentHandle) openWrite(handle.getSegmentName()) : h;
      seal(writeHandle);
    deleteChunks(h, s -> true);
    try {
      this.baseStorage.delete(headerHandle); // headerHandle is obtained in an elided part of this method
      // ... (remainder of delete() elided)
origin: pravega/pravega

@Override
public int read(SegmentHandle handle, long offset, byte[] buffer, int bufferOffset, int length) throws StreamSegmentException {
  val h = asReadableHandle(handle);
  long traceId = LoggerHelpers.traceEnter(log, "read", handle, offset, length);
  ensureNotDeleted(h);
  Exceptions.checkArrayRange(bufferOffset, length, buffer.length, "bufferOffset", "length");
  // ... (snippet truncated: the surrounding refresh/validation logic is only partially shown)
    val newHandle = (RollingSegmentHandle) openRead(handle.getSegmentName());
    h.refresh(newHandle);
    log.debug("Handle refreshed: {}.", h);
      checkTruncatedSegment(null, h, current);
      if (current.getLength() == 0) {
        checkTruncatedSegment(ex, h, current);
    val newHandle = (RollingSegmentHandle) openRead(handle.getSegmentName());
    h.refresh(newHandle);
    if (h.isDeleted()) {
      // ... (remainder of read() elided)
origin: pravega/pravega

private void checkConcatResult(RollingStorage s, RollingSegmentHandle targetHandle, String sourceSegmentName, int expectedChunkCount, int expectedLength) throws Exception {
  Assert.assertFalse("Expecting the source segment to not exist anymore.", s.exists(sourceSegmentName));
  Assert.assertEquals("Unexpected number of SegmentChunks in target.", expectedChunkCount, targetHandle.chunks().size());
  Assert.assertEquals("Unexpected target length.", expectedLength, targetHandle.length());
  // Reload the handle and verify nothing strange happened in Storage.
  val targetHandle2 = (RollingSegmentHandle) s.openWrite(SEGMENT_NAME);
  Assert.assertEquals("Unexpected number of SegmentChunks in reloaded target handle.", expectedChunkCount, targetHandle2.chunks().size());
  Assert.assertEquals("Unexpected reloaded target length.", targetHandle.length(), targetHandle2.length());
}
origin: pravega/pravega

/**
 * Tests the ability to concat using the header file for those cases when native concat cannot be used because the
 * source Segment has multiple SegmentChunks.
 */
@Test
public void testConcatHeaderMultiFile() throws Exception {
  final int initialTargetLength = (int) DEFAULT_ROLLING_POLICY.getMaxLength() / 2;
  final String sourceSegmentName = "SourceSegment";
  @Cleanup
  val baseStorage = new InMemoryStorage();
  @Cleanup
  val s = new RollingStorage(baseStorage, DEFAULT_ROLLING_POLICY);
  s.initialize(1);
  // Create a Target Segment and a Source Segment and write some data to them.
  s.create(SEGMENT_NAME);
  val targetHandle = (RollingSegmentHandle) s.openWrite(SEGMENT_NAME);
  val writeStream = new ByteArrayOutputStream();
  populate(s, targetHandle, 1, initialTargetLength, initialTargetLength, writeStream);
  s.create(sourceSegmentName);
  val sourceHandle = (RollingSegmentHandle) s.openWrite(sourceSegmentName);
  populate(s, sourceHandle, APPENDS_PER_SEGMENT, initialTargetLength, initialTargetLength, writeStream);
  s.seal(sourceHandle);
  // Concat and verify the handle has been updated accordingly.
  s.concat(targetHandle, initialTargetLength, sourceSegmentName);
  checkConcatResult(s, targetHandle, sourceSegmentName, 1 + sourceHandle.chunks().size(), initialTargetLength + (int) sourceHandle.length());
  checkWrittenData(writeStream.toByteArray(), s.openRead(SEGMENT_NAME), s);
}
origin: pravega/pravega

/**
 * Tests the ability to use native concat for those cases when it's appropriate.
 */
@Test
public void testConcatNatively() throws Exception {
  final int initialTargetLength = (int) DEFAULT_ROLLING_POLICY.getMaxLength() / 2;
  final int initialSourceLength = (int) DEFAULT_ROLLING_POLICY.getMaxLength() - initialTargetLength;
  final String sourceSegmentName = "SourceSegment";
  @Cleanup
  val baseStorage = new InMemoryStorage();
  @Cleanup
  val s = new RollingStorage(baseStorage, DEFAULT_ROLLING_POLICY);
  s.initialize(1);
  // Create a target Segment and write a little data to it.
  s.create(SEGMENT_NAME);
  val targetHandle = (RollingSegmentHandle) s.openWrite(SEGMENT_NAME);
  val writeStream = new ByteArrayOutputStream();
  populate(s, targetHandle, 1, initialTargetLength, initialTargetLength, writeStream);
  // Create a source Segment and write a little data to it, making sure it is small enough to fit into the target
  // when we need to concat.
  s.create(sourceSegmentName);
  val sourceHandle = (RollingSegmentHandle) s.openWrite(sourceSegmentName);
  populate(s, sourceHandle, 1, initialSourceLength, initialSourceLength, writeStream);
  s.seal(sourceHandle);
  // Concat and verify the handle has been updated accordingly.
  s.concat(targetHandle, initialTargetLength, sourceSegmentName);
  checkConcatResult(s, targetHandle, sourceSegmentName, 1, initialTargetLength + initialSourceLength);
  checkWrittenData(writeStream.toByteArray(), s.openRead(SEGMENT_NAME), s);
}
origin: pravega/pravega

  @Override
  public Storage createStorageAdapter() {
    HDFSStorage s = new HDFSStorage(this.config);
    return new AsyncStorageWrapper(new RollingStorage(s), this.executor);
  }
}
origin: pravega/pravega

@Override
public SegmentProperties getStreamSegmentInfo(String segmentName) throws StreamSegmentException {
  val handle = (RollingSegmentHandle) openRead(segmentName);
  return StreamSegmentInformation
      .builder()
      .name(handle.getSegmentName())
      .sealed(handle.isSealed())
      .length(handle.length())
      .build();
}
origin: pravega/pravega

val baseStorage = new TestStorage();
@Cleanup
val s = new RollingStorage(baseStorage, DEFAULT_ROLLING_POLICY);
s.initialize(1);
s.create(SEGMENT_NAME);
val writeHandle = (RollingSegmentHandle) s.openWrite(SEGMENT_NAME);
val readHandle = s.openRead(SEGMENT_NAME); // Open now, before writing, so we force a refresh.
val writeStream = new ByteArrayOutputStream();
populate(s, writeHandle, writeStream);
// ... (snippet truncated: targetSegmentName and the source-segment truncation are set up in elided parts of this test)
s.create(targetSegmentName);
val targetSegmentHandle = s.openWrite(targetSegmentName);
s.seal(writeHandle);
AssertExtensions.assertThrows(
    "concat() allowed using a truncated segment as a source.",
    () -> s.concat(targetSegmentHandle, 0, SEGMENT_NAME),
    ex -> ex instanceof IllegalStateException);
origin: pravega/pravega

  @Override
  public Storage createStorageAdapter() {
    FileSystemStorage s = new FileSystemStorage(this.config);
    return new AsyncStorageWrapper(new RollingStorage(s), this.executor);
  }
}
io.pravega.segmentstore.storage.rolling.RollingStorage

Javadoc

A layer on top of a general SyncStorage implementation that allows rolling Segments on a size-based policy and truncating them at various offsets.

Every Segment that is created using this Storage is made up of a Header and zero or more SegmentChunks.
  • The Header contains the Segment's Rolling Policy, as well as an ordered list of Offset-to-SegmentChunk pointers for all the SegmentChunks in the Segment.
  • The SegmentChunks contain the data that their Segment is made of. A SegmentChunk starting at offset N with length L contains data for offsets [N, N+L) of the Segment.
  • A Segment is considered to exist if it has a non-empty Header and if its last SegmentChunk exists. If it does not have any SegmentChunks (freshly created), it is considered to exist.
  • A Segment is considered to be Sealed if its Header is sealed.

A note about compatibility:
  • The RollingStorage wrapper is fully compatible with data and Segments that were created before RollingStorage was applied. That means it can access and modify existing Segments that were created without a Header, but all new Segments will have a Header. As such, there is no need to do any sort of migration when starting to use this class.
  • Should the RollingStorage need to be discontinued without having to do a migration:
    • The create() method should be overridden (in a derived class) to create new Segments natively (without a Header).
    • The concat() method should be overridden (in a derived class) to not convert Segments without a Header into Segments with a Header.
    • Existing Segments (made up of a Header and multiple SegmentChunks) can still be accessed by means of this class.
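Taken together, the snippets above show the typical lifecycle: wrap a SyncStorage in a RollingStorage, initialize it, then create, write, seal, and read Segments through it. The sketch below strings those calls together over the InMemoryStorage used in the tests on this page. The import packages (other than RollingStorage's), the new SegmentRollingPolicy(maxLength) constructor, the 1 MB policy value, and the "ExampleSegment" name are illustrative assumptions; every other call mirrors signatures visible in the snippets.

import io.pravega.segmentstore.storage.SegmentHandle;
import io.pravega.segmentstore.storage.SegmentRollingPolicy;
import io.pravega.segmentstore.storage.mocks.InMemoryStorage;
import io.pravega.segmentstore.storage.rolling.RollingStorage;

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class RollingStorageUsageSketch {
  public static void main(String[] args) throws Exception {
    // Assumption: roll over to a new SegmentChunk once the active one reaches 1 MB.
    SegmentRollingPolicy policy = new SegmentRollingPolicy(1024 * 1024);
    try (InMemoryStorage baseStorage = new InMemoryStorage();
         RollingStorage storage = new RollingStorage(baseStorage, policy)) {
      storage.initialize(1); // epoch, as in the test snippets above

      String segmentName = "ExampleSegment"; // hypothetical name
      storage.create(segmentName);

      // Append through a write handle; writes beyond the policy's max length roll into new chunks.
      byte[] data = "hello rolling storage".getBytes(StandardCharsets.UTF_8);
      SegmentHandle writeHandle = storage.openWrite(segmentName);
      storage.write(writeHandle, 0, new ByteArrayInputStream(data), data.length);
      storage.seal(writeHandle);

      // Read the data back through a read handle; offsets span chunk boundaries transparently.
      SegmentHandle readHandle = storage.openRead(segmentName);
      byte[] readBack = new byte[(int) storage.getStreamSegmentInfo(segmentName).getLength()];
      storage.read(readHandle, 0, readBack, 0, readBack.length);
      System.out.println(new String(readBack, StandardCharsets.UTF_8));
    }
  }
}

In production the base storage would instead be something like the HDFSStorage or FileSystemStorage instances shown in the createStorageAdapter() snippets above.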

Most used methods

  • <init>
    Creates a new instance of the RollingStorage class (see the construction sketch after this list).
  • create
  • delete
  • openRead
  • openWrite
  • seal
  • asReadableHandle
  • asWritableHandle
  • canTruncate
  • checkIfEmptyAndNotSealed
  • checkTruncatedSegment
  • concat
  • createChunk
  • createHeader
  • deleteChunks
  • ensureNotDeleted
  • ensureNotSealed
  • ensureOffset
  • exists
  • getHeaderInfo
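Construction sketch, as referenced from the <init> entry above: the createStorageAdapter() overrides on this page build a RollingStorage over a concrete SyncStorage (HDFSStorage, FileSystemStorage, or the in-memory variant) and then wrap it in an AsyncStorageWrapper to expose the asynchronous Storage API. The factory below is a hypothetical illustration of that shape; the class name, the 128 MB policy value, and the import packages (other than RollingStorage's) are assumptions, not Pravega-defined values.

import io.pravega.segmentstore.storage.AsyncStorageWrapper;
import io.pravega.segmentstore.storage.SegmentRollingPolicy;
import io.pravega.segmentstore.storage.Storage;
import io.pravega.segmentstore.storage.SyncStorage;
import io.pravega.segmentstore.storage.rolling.RollingStorage;

import java.util.concurrent.ScheduledExecutorService;

public class RollingStorageAdapterSketch {
  // Illustrative rolling policy: cap each SegmentChunk at 128 MB (not a Pravega default).
  private static final SegmentRollingPolicy POLICY = new SegmentRollingPolicy(128L * 1024 * 1024);

  private final SyncStorage baseStorage;
  private final ScheduledExecutorService executor;

  public RollingStorageAdapterSketch(SyncStorage baseStorage, ScheduledExecutorService executor) {
    this.baseStorage = baseStorage;
    this.executor = executor;
  }

  public Storage createStorageAdapter() {
    // Same shape as the HDFSStorage and FileSystemStorage adapters shown above:
    // RollingStorage adds size-based rolling, AsyncStorageWrapper adds the async Storage interface.
    return new AsyncStorageWrapper(new RollingStorage(this.baseStorage, POLICY), this.executor);
  }
}

The one-argument new RollingStorage(baseStorage) constructor used in the first snippet on this page applies a default rolling policy instead of an explicit one.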
