public static List<Row> robustRead(ByteBuffer key, QueryPath qp, List<ByteBuffer> columns, ConsistencyLevel cl)
        throws IOException
{
    ReadCommand rc = new SliceByNamesReadCommand(CassandraUtils.keySpace, key, qp, columns);
    return robustRead(cl, rc);
}
names.add(CellNames.simpleDense(ByteBufferUtil.zeroByteBuffer(1)));
NamesQueryFilter nqf = new NamesQueryFilter(names);
SliceByNamesReadCommand cmd = new SliceByNamesReadCommand(ks, ByteBufferUtil.zeroByteBuffer(1), cf, 1L, nqf);
StorageProxy.read(ImmutableList.<ReadCommand>of(cmd), ConsistencyLevel.QUORUM);
log.info("Read on CF {} in KS {} succeeded", cf, ks);
public LucandraTermInfo[] loadFilteredTerms(Term term, List<ByteBuffer> docNums) throws IOException
{
    long start = System.currentTimeMillis();

    ColumnParent parent = new ColumnParent();
    parent.setColumn_family(CassandraUtils.termVecColumnFamily);

    ByteBuffer key;
    try
    {
        key = CassandraUtils.hashKeyBytes(indexName.getBytes("UTF-8"), CassandraUtils.delimeterBytes,
                term.field().getBytes("UTF-8"), CassandraUtils.delimeterBytes, term.text().getBytes("UTF-8"));
    }
    catch (UnsupportedEncodingException e2)
    {
        throw new RuntimeException("JVM doesn't support UTF-8", e2);
    }

    ReadCommand rc = new SliceByNamesReadCommand(CassandraUtils.keySpace, key, parent, docNums);
    List<Row> rows = CassandraUtils.robustRead(CassandraUtils.consistency, rc);

    LucandraTermInfo[] termInfo = null;
    if (rows != null && rows.size() > 0 && rows.get(0) != null && rows.get(0).cf != null)
        termInfo = TermCache.convertTermInfo(rows.get(0).cf.getSortedColumns());

    long end = System.currentTimeMillis();

    if (logger.isDebugEnabled())
        // The ternary must be parenthesized: '+' binds tighter than '?:', so the
        // unparenthesized original compared a concatenated String to null (always
        // false) and dereferenced termInfo unconditionally, risking an NPE.
        logger.debug("loadFilteredTerms: " + term + " (" + (termInfo == null ? 0 : termInfo.length) + ") took "
                + (end - start) + "ms");

    return termInfo;
}
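The log-line bug fixed above is worth isolating: in Java, string concatenation (`+`) binds tighter than the conditional operator (`?:`), so an unparenthesized ternary inside a concatenated message does not evaluate the condition you wrote. A minimal self-contained demonstration (class and method names here are illustrative, not from the original code):

```java
public class TernaryPrecedence {
    // Parses as: (("count=" + termInfo) == null) ? "0" : (termInfo.length + "!").
    // The condition is always false, so termInfo.length is always evaluated:
    // the "count=" prefix is silently lost, and a null array throws an NPE.
    static String broken(int[] termInfo) {
        return "count=" + termInfo == null ? "0" : termInfo.length + "!";
    }

    // Parenthesizing the ternary restores the intended meaning.
    static String fixed(int[] termInfo) {
        return "count=" + (termInfo == null ? 0 : termInfo.length);
    }
}
```

Compilers and linters rarely flag this, because both parses are type-correct; the only symptom is a malformed log line or a `NullPointerException` on the rare null path.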
ReadCommand rc = new SliceByNamesReadCommand(CassandraUtils.keySpace, key, CassandraUtils.metaColumnPath,
        Arrays.asList(CassandraUtils.documentMetaFieldBytes));

readCommands.add(new SliceByNamesReadCommand(CassandraUtils.keySpace, key,
        new ColumnParent().setColumn_family(CassandraUtils.termVecColumnFamily),
        Arrays.asList(ByteBuffer.wrap(CassandraUtils.writeVInt(docI)))));
.add(new SliceByNamesReadCommand(CassandraUtils.keySpace, key, columnParent, fieldNames));
public static ReadCommand create(String ksName, ByteBuffer key, String cfName, long timestamp, IDiskAtomFilter filter)
{
    if (filter instanceof SliceQueryFilter)
        return new SliceFromReadCommand(ksName, key, cfName, timestamp, (SliceQueryFilter) filter);
    else
        return new SliceByNamesReadCommand(ksName, key, cfName, timestamp, (NamesQueryFilter) filter);
}
public ReadCommand copy()
{
    return new SliceByNamesReadCommand(ksName, key, cfName, timestamp, filter).setIsDigestQuery(isDigestQuery());
}
private void retryDummyRead(String ks, String cf) throws PermanentBackendException
{
    final long limit = System.currentTimeMillis() + (60L * 1000L);
    while (System.currentTimeMillis() < limit)
    {
        try
        {
            SortedSet<ByteBuffer> ss = new TreeSet<ByteBuffer>();
            ss.add(ByteBufferUtil.zeroByteBuffer(1));
            NamesQueryFilter nqf = new NamesQueryFilter(ss);
            SliceByNamesReadCommand cmd = new SliceByNamesReadCommand(ks, ByteBufferUtil.zeroByteBuffer(1), cf, 1L, nqf);
            StorageProxy.read(ImmutableList.<ReadCommand>of(cmd), ConsistencyLevel.QUORUM);
            log.info("Read on CF {} in KS {} succeeded", cf, ks);
            return;
        }
        catch (Throwable t)
        {
            log.warn("Failed to read CF {} in KS {} following creation", cf, ks, t);
        }

        try
        {
            Thread.sleep(1000L);
        }
        catch (InterruptedException e)
        {
            throw new PermanentBackendException(e);
        }
    }
    throw new PermanentBackendException("Timed out while attempting to read CF " + cf + " in KS " + ks
            + " following creation");
}
}
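The shape of `retryDummyRead` — attempt, swallow the failure, sleep, retry until a wall-clock deadline, then fail permanently — is a general pattern worth extracting. A minimal sketch decoupled from Cassandra (the names `DeadlineRetry` and `retryUntil` are hypothetical, not from the original code):

```java
import java.util.function.Supplier;

public class DeadlineRetry {
    // Retry attempt.get() until it succeeds or the wall-clock deadline passes.
    static <T> T retryUntil(long deadlineMillis, long sleepMillis, Supplier<T> attempt) {
        RuntimeException last = null;
        while (System.currentTimeMillis() < deadlineMillis) {
            try {
                return attempt.get();          // success: hand the result back
            } catch (RuntimeException e) {
                last = e;                      // remember the failure, pause, retry
            }
            try {
                Thread.sleep(sleepMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;                         // give up promptly if interrupted
            }
        }
        throw new RuntimeException("Timed out before a successful attempt", last);
    }
}
```

Chaining the last caught exception as the cause of the timeout error, as the original does by logging each failure, preserves the reason the reads kept failing.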
public ReadCommand deserialize(DataInput in, int version) throws IOException
{
    boolean isDigest = in.readBoolean();
    String keyspaceName = in.readUTF();
    ByteBuffer key = ByteBufferUtil.readWithShortLength(in);
    String cfName = in.readUTF();
    long timestamp = in.readLong();
    CFMetaData metadata = Schema.instance.getCFMetaData(keyspaceName, cfName);
    if (metadata == null)
    {
        String message = String.format("Got slice command for nonexistent table %s.%s. If the table was just "
                + "created, this is likely due to the schema not being fully propagated. Please wait for schema "
                + "agreement on table creation.", keyspaceName, cfName);
        throw new UnknownColumnFamilyException(message, null);
    }
    NamesQueryFilter filter = metadata.comparator.namesQueryFilterSerializer().deserialize(in, version);
    return new SliceByNamesReadCommand(keyspaceName, key, cfName, timestamp, filter).setIsDigestQuery(isDigest);
}
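The deserializer above only works because reads happen in exactly the order the fields were written: digest flag, keyspace, short-length-prefixed key, column family name, timestamp. A round-trip sketch of that wire order using plain `java.io` (the real code uses Cassandra's `ByteBufferUtil` and versioned serializers; `WireOrder` and its methods are illustrative names):

```java
import java.io.*;

public class WireOrder {
    static byte[] write(boolean digest, String ks, byte[] key, String cf, long ts) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeBoolean(digest);   // digest flag first
            out.writeUTF(ks);           // keyspace name
            out.writeShort(key.length); // short length prefix, as in readWithShortLength
            out.write(key);
            out.writeUTF(cf);           // column family name
            out.writeLong(ts);          // timestamp
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e); // in-memory streams don't actually throw
        }
    }

    static Object[] read(byte[] data) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            boolean digest = in.readBoolean();
            String ks = in.readUTF();
            byte[] key = new byte[in.readUnsignedShort()];
            in.readFully(key);          // read exactly the prefixed length
            String cf = in.readUTF();
            long ts = in.readLong();
            return new Object[] { digest, ks, key, cf, ts };
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Any reordering of the reads silently corrupts every later field, which is why the original deserializer mirrors its serializer line by line.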
SliceByNamesReadCommand readCommand = new SliceByNamesReadCommand(ks, rKey, new QueryPath(cf, null, null), cols);
readCommand.setDigestQuery(false);
commands.add(readCommand);
private void getCurrentValuesFromCFS(List<CounterUpdateCell> counterUpdateCells, ColumnFamilyStore cfs, ClockAndCount[] currentValues)
{
    SortedSet<CellName> names = new TreeSet<>(cfs.metadata.comparator);
    for (int i = 0; i < currentValues.length; i++)
        if (currentValues[i] == null)
            names.add(counterUpdateCells.get(i).name());

    ReadCommand cmd = new SliceByNamesReadCommand(getKeyspaceName(), key(), cfs.metadata.cfName, Long.MIN_VALUE,
            new NamesQueryFilter(names));
    Row row = cmd.getRow(cfs.keyspace);
    ColumnFamily cf = row == null ? null : row.cf;

    for (int i = 0; i < currentValues.length; i++)
    {
        if (currentValues[i] != null)
            continue;

        Cell cell = cf == null ? null : cf.getColumn(counterUpdateCells.get(i).name());
        if (cell == null || !cell.isLive()) // absent or a tombstone
            currentValues[i] = ClockAndCount.BLANK;
        else
            currentValues[i] = CounterContext.instance().getLocalClockAndCount(cell.value());
    }
}
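`getCurrentValuesFromCFS` follows a fill-only-missing pattern: collect the names of the slots still unresolved, issue one batched by-name read for just those, then walk the array again filling gaps and defaulting absent entries to a blank sentinel. A self-contained sketch of the same shape with a plain `Map` standing in for the column family store (all names here are illustrative):

```java
import java.util.*;

public class FillMissing {
    // current[i] == null marks a slot still to resolve; keys.get(i) names it.
    static void fillMissing(Long[] current, List<String> keys, Map<String, Long> store, long blank) {
        // Gather only the names whose value is still unknown.
        List<String> wanted = new ArrayList<>();
        for (int i = 0; i < current.length; i++)
            if (current[i] == null)
                wanted.add(keys.get(i));

        // One batched lookup, standing in for the single SliceByNamesReadCommand.
        Map<String, Long> fetched = new HashMap<>();
        for (String name : wanted)
            if (store.containsKey(name))
                fetched.put(name, store.get(name));

        // Fill the gaps, defaulting to the blank sentinel when nothing came back.
        for (int i = 0; i < current.length; i++) {
            if (current[i] != null)
                continue;
            current[i] = fetched.getOrDefault(keys.get(i), blank);
        }
    }
}
```

Reading only the unresolved names keeps the by-name filter (and thus the read) as small as possible, which is the point of using `SliceByNamesReadCommand` here rather than a full-row slice.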
commands.add(new SliceByNamesReadCommand(metadata.ksName, key, select.getColumnFamily(), now, new NamesQueryFilter(columnNames)));