public static RowIndexEntry create(long position, DeletionTime deletionTime, ColumnIndex index)
{
    assert index != null;
    assert deletionTime != null;

    // we only consider the columns summary when determining whether to create an IndexedEntry,
    // since if there are insufficient columns to be worth indexing we're going to seek to
    // the beginning of the row anyway, so we might as well read the tombstone there as well.
    if (index.columnsIndex.size() > 1)
        return new IndexedEntry(position, deletionTime, index.columnsIndex);
    else
        return new RowIndexEntry(position);
}
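The comment above captures the key trade-off: with at most one index block, a reader has to seek to the row start anyway, so carrying the tombstone in the entry buys nothing. A minimal self-contained sketch of that decision rule (the `Entry` type here is a simplified stand-in, not Cassandra's class):

```java
// Simplified stand-in for RowIndexEntry.create: an entry is "indexed"
// (and carries the row-level deletion time) only when the column index
// holds more than one block; otherwise only the position is kept.
public class EntrySketch
{
    static final class Entry
    {
        final long position;
        final boolean indexed; // true => deletion time travels with the entry

        Entry(long position, boolean indexed)
        {
            this.position = position;
            this.indexed = indexed;
        }
    }

    static Entry create(long position, int columnsIndexSize)
    {
        // Mirrors: index.columnsIndex.size() > 1 ? IndexedEntry : RowIndexEntry
        return new Entry(position, columnsIndexSize > 1);
    }

    public static void main(String[] args)
    {
        // Single block: the reader seeks to the row start and reads the
        // tombstone there, so a plain (non-indexed) entry suffices.
        assert !create(42L, 1).indexed;
        // Multiple blocks: promote the entry so readers can skip ahead.
        assert create(42L, 3).indexed;
        System.out.println("ok");
    }
}
```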
public void serialize(RowIndexEntry rie, DataOutputPlus out) throws IOException
{
    out.writeLong(rie.position);
    out.writeInt(rie.promotedSize(type));

    if (rie.isIndexed())
    {
        DeletionTime.serializer.serialize(rie.deletionTime(), out);
        out.writeInt(rie.columnsIndex().size());
        ISerializer<IndexHelper.IndexInfo> idxSerializer = type.indexSerializer();
        for (IndexHelper.IndexInfo info : rie.columnsIndex())
            idxSerializer.serialize(info, out);
    }
}
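The on-disk layout written above is: position (8 bytes), promoted-index size (4 bytes), then, only for indexed entries, the deletion time followed by the block count and the serialized blocks. A hedged sketch of the non-indexed path using plain `java.io` (the zero promoted size stands in for `rie.promotedSize(type)`; the method name is illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of the non-indexed serialization path: a plain entry writes
// only its position and a zero promoted-index size, nothing else.
public class SerializeSketch
{
    static byte[] serializeNonIndexed(long position) throws IOException
    {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeLong(position); // rie.position
        out.writeInt(0);         // promoted size is 0 when not indexed
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws IOException
    {
        // 8 bytes of position + 4 bytes of size: 12 bytes total.
        assert serializeNonIndexed(1024L).length == 12;
        System.out.println("ok");
    }
}
```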
/**
 * @return true if this index entry contains the row-level tombstone and column summary. Otherwise,
 * caller should fetch these from the row header.
 */
public boolean isIndexed()
{
    return columnsIndexCount() > 1;
}
public IndexState(Reader reader, ClusteringComparator comparator, RowIndexEntry indexEntry,
                  boolean reversed, FileHandle indexFile)
{
    this.reader = reader;
    this.comparator = comparator;
    this.indexEntry = indexEntry;
    this.indexInfoRetriever = indexEntry.openWithIndex(indexFile);
    this.reversed = reversed;
    this.currentIndexIdx = reversed ? indexEntry.columnsIndexCount() : -1;
}
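Note how `currentIndexIdx` starts one position outside the valid range: `columnsIndexCount()` for reversed iteration, `-1` for forward, so the first advance lands on the first block in iteration order. A small self-contained sketch of that cursor initialization (method names are illustrative, not Cassandra's):

```java
// Sketch of the cursor initialization in IndexState: reversed iteration
// starts one past the last block (count), forward iteration one before
// the first (-1), so the first step lands on a valid block either way.
public class CursorSketch
{
    static int initialIndex(boolean reversed, int blockCount)
    {
        return reversed ? blockCount : -1;
    }

    static int firstVisited(boolean reversed, int blockCount)
    {
        int idx = initialIndex(reversed, blockCount);
        return reversed ? idx - 1 : idx + 1; // one advance in iteration order
    }

    public static void main(String[] args)
    {
        assert firstVisited(false, 4) == 0; // forward: block 0 first
        assert firstVisited(true, 4) == 3;  // reversed: block 3 first
        System.out.println("ok");
    }
}
```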
boolean needSeekAtPartitionStart = !indexEntry.isIndexed() || !columns.fetchedColumns().statics.isEmpty();
this.partitionLevelDeletion = indexEntry.deletionTime();
this.staticRow = Rows.EMPTY_STATIC_ROW;
this.reader = needsReader ? createReader(indexEntry, file, shouldCloseFile) : null;
if (rowIndexEntry == null || !rowIndexEntry.indexOnHeap())
    return null;

try (RowIndexEntry.IndexInfoRetriever onHeapRetriever = rowIndexEntry.openWithIndex(null))
{
    IndexInfo column = onHeapRetriever.columnsIndex(filter.isReversed()
                                                    ? rowIndexEntry.columnsIndexCount() - 1
                                                    : 0);
    ClusteringPrefix lowerBoundPrefix = filter.isReversed() ? column.lastName : column.firstName;
    assert lowerBoundPrefix.getRawValues().length <= sstable.metadata.comparator.size() :
// Only an indexed entry carries the columns summary and the row-level
// tombstone; otherwise both must be read from the row header (see isIndexed()).
if (indexEntry.isIndexed())
{
    indexList = indexEntry.columnsIndex();
    cf.delete(indexEntry.deletionTime());
}
protected Reader createReaderInternal(RowIndexEntry indexEntry, FileDataInput file, boolean shouldCloseFile)
{
    return indexEntry.isIndexed()
         ? new ForwardIndexedReader(indexEntry, file, shouldCloseFile)
         : new ForwardReader(file, shouldCloseFile);
}
public static RowIndexEntry rawAppend(ColumnFamily cf, long startPosition, DecoratedKey key, DataOutputPlus out) throws IOException
{
    assert cf.hasColumns() || cf.isMarkedForDelete();

    ColumnIndex.Builder builder = new ColumnIndex.Builder(cf, key.getKey(), out);
    ColumnIndex index = builder.build(cf);

    out.writeShort(END_OF_ROW);
    return RowIndexEntry.create(startPosition, cf.deletionInfo().getTopLevelDeletion(), index);
}
public void serializeForCache(RowIndexEntry<IndexInfo> rie, DataOutputPlus out) throws IOException
{
    assert version.storeRows();

    rie.serializeForCache(out);
}
public void serialize(RowIndexEntry<IndexInfo> rie, DataOutputPlus out, ByteBuffer indexInfo) throws IOException
{
    assert version.storeRows() : "We read old index files but we should never write them";

    rie.serialize(out, idxInfoSerializer, indexInfo);
}
this.indexes = indexEntry.columnsIndex();
emptyColumnFamily = ArrayBackedSortedColumns.factory.create(sstable.metadata);
if (!indexes.isEmpty())
{
    // An indexed entry carries the row-level tombstone, so it can be applied
    // without seeking to the row start.
    emptyColumnFamily.delete(indexEntry.deletionTime());
    fetcher = new IndexedBlockFetcher(indexEntry.position);
}
protected Reader createReaderInternal(RowIndexEntry indexEntry, FileDataInput file, boolean shouldCloseFile)
{
    return indexEntry.isIndexed()
         ? new ReverseIndexedReader(indexEntry, file, shouldCloseFile)
         : new ReverseReader(file, shouldCloseFile);
}
return RowIndexEntry.create(currentPosition, emptyColumnFamily.deletionInfo().getTopLevelDeletion(), columnsIndex);