// Fragments of a Hive page-sink finish path, flattened in extraction. The stream
// sources and assignment targets added below are inferred from context and are
// assumptions, not the verified original code.
partitionUpdates.add(wrappedBuffer(partitionUpdateCodec.toJsonBytes(partitionUpdate)));
writer.getVerificationTask()
        .map(Executors::callable)
        .ifPresent(verificationTasks::add);

// Assumed source: the finished writers, summed into the sink's counters.
writtenBytes = writers.stream()
        .mapToLong(HiveWriter::getWrittenBytes)
        .sum();
validationCpuNanos = writers.stream()
        .mapToLong(HiveWriter::getValidationCpuNanos)
        .sum();

// Assumed source: futures from submitting the verification tasks to a Guava
// ListeningExecutorService (the cast presumes the futures are listenable).
List<ListenableFuture<?>> futures = verificationTasks.stream()
        .map(verificationExecutor::submit)
        .map(future -> (ListenableFuture<?>) future)
        .collect(toList());
// "result" is assumed to hold the collected partition updates.
return Futures.transform(Futures.allAsList(futures), input -> result, directExecutor());
@VisibleForTesting
public Backoff(int minTries, Duration maxFailureInterval, Ticker ticker, List<Duration> backoffDelayIntervals) {
    checkArgument(minTries > 0, "minTries must be at least 1");
    requireNonNull(maxFailureInterval, "maxFailureInterval is null");
    requireNonNull(ticker, "ticker is null");
    requireNonNull(backoffDelayIntervals, "backoffDelayIntervals is null");
    checkArgument(!backoffDelayIntervals.isEmpty(), "backoffDelayIntervals must contain at least one entry");

    this.minTries = minTries;
    this.maxFailureIntervalNanos = maxFailureInterval.roundTo(NANOSECONDS);
    this.ticker = ticker;
    this.backoffDelayIntervalsNanos = backoffDelayIntervals.stream()
            .mapToLong(duration -> duration.roundTo(NANOSECONDS))
            .toArray();
}
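A minimal, self-contained sketch of the same mapToLong(...).toArray() idiom, using java.time.Duration instead of the Airlift Duration assumed above:

import java.time.Duration;
import java.util.List;

class BackoffDelays {
    public static void main(String[] args) {
        // Convert a list of delay durations into a primitive long[] of nanoseconds,
        // mirroring the constructor's conversion step.
        List<Duration> delays = List.of(Duration.ofMillis(50), Duration.ofMillis(100), Duration.ofMillis(200));
        long[] delayNanos = delays.stream()
                .mapToLong(Duration::toNanos)
                .toArray();
        System.out.println(delayNanos.length); // 3
    }
}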
@Override
public long getLastSequenceId() {
    // Return the highest sequence id across all partitions. This is correct because
    // a single id generator is shared across all partitions for the same producer.
    // (mapToLong replaces the original's redundant map(...).mapToLong(Long::longValue) boxing round-trip.)
    return producers.stream()
            .mapToLong(Producer::getLastSequenceId)
            .max()
            .orElse(-1);
}
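The same reduce-with-default idiom in isolation; the Producer record here is illustrative, not the real Pulsar type:

import java.util.List;

class MaxSequenceId {
    record Producer(long lastSequenceId) {}

    public static void main(String[] args) {
        List<Producer> producers = List.of(new Producer(7), new Producer(42), new Producer(13));
        // max() yields an OptionalLong; orElse(-1) supplies the "no producers yet" default.
        long last = producers.stream()
                .mapToLong(Producer::lastSequenceId)
                .max()
                .orElse(-1);
        System.out.println(last); // 42
    }
}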
/**
 * If a value is present in {@code optional}, returns a stream containing only that element,
 * otherwise returns an empty stream.
 *
 * <p><b>Java 9 users:</b> use {@code optional.stream()} instead.
 */
public static LongStream stream(OptionalLong optional) {
    return optional.isPresent() ? LongStream.of(optional.getAsLong()) : LongStream.empty();
}
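A quick sketch of the Java 9 replacement the javadoc points to, where OptionalLong.stream() plays the role of the helper above:

import java.util.OptionalLong;
import java.util.stream.LongStream;

class OptionalLongStreamDemo {
    public static void main(String[] args) {
        // A present optional contributes one element; an empty one contributes none.
        long sum = LongStream.concat(
                OptionalLong.of(5).stream(),
                OptionalLong.empty().stream())
                .sum();
        System.out.println(sum); // 5
    }
}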
@Override
public List<Long> getTxnIds() {
    if (txnIds != null) {
        return txnIds;
    }
    return LongStream.rangeClosed(fromTxnId, toTxnId)
            .boxed()
            .collect(Collectors.toList());
}
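The rangeClosed-then-boxed idiom standalone, for reference:

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

class TxnIdRange {
    public static void main(String[] args) {
        // rangeClosed includes both endpoints; boxed() turns the LongStream into
        // a Stream<Long> so it can be collected into a List<Long>.
        List<Long> ids = LongStream.rangeClosed(100, 104)
                .boxed()
                .collect(Collectors.toList());
        System.out.println(ids); // [100, 101, 102, 103, 104]
    }
}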
// Fragments of a task-source acknowledgement path, flattened in extraction. The
// stream source, the final TaskSource constructor argument, and the assignment
// targets below are inferred from context and are assumptions.
List<TaskSource> updatedUnpartitionedSources = sources.stream()
        .map(source -> new TaskSource(
                source.getPlanNodeId(),
                source.getSplits().stream()
                        .filter(scheduledSplit -> scheduledSplit.getSequenceId() > currentMaxAcknowledgedSplit)
                        .collect(Collectors.toSet()),
                source.isNoMoreSplits()))
        .collect(toList());

// Advance the high-water mark, keeping the previous value when no splits arrived.
maxAcknowledgedSplit = sources.stream()
        .flatMap(source -> source.getSplits().stream())
        .mapToLong(ScheduledSplit::getSequenceId)
        .max()
        .orElse(maxAcknowledgedSplit);
return updatedUnpartitionedSources;
// Fragments of an object-to-ORC conversion, flattened in extraction. The
// instanceof guards and the long[]/double[] stream sources are inferred from
// the surrounding branches and are assumptions.
if (o instanceof int[]) {
    int[] intArray = (int[]) o;
    return Arrays.stream(intArray)
            .mapToObj((element) -> convertToORCObject(TypeInfoFactory.getPrimitiveTypeInfo("int"), element))
            .collect(Collectors.toList());
}
if (o instanceof long[]) {   // assumed guard
    return Arrays.stream((long[]) o)
            .mapToObj((element) -> convertToORCObject(TypeInfoFactory.getPrimitiveTypeInfo("bigint"), element))
            .collect(Collectors.toList());
}
if (o instanceof float[]) {   // assumed guard
    float[] floatArray = (float[]) o;
    // There is no FloatStream, so walk the indices, widen to double, and cast back.
    return IntStream.range(0, floatArray.length)
            .mapToDouble(i -> floatArray[i])
            .mapToObj((element) -> convertToORCObject(TypeInfoFactory.getPrimitiveTypeInfo("float"), (float) element))
            .collect(Collectors.toList());
}
if (o instanceof double[]) {   // assumed guard
    return Arrays.stream((double[]) o)
            .mapToObj((element) -> convertToORCObject(TypeInfoFactory.getPrimitiveTypeInfo("double"), element))
            .collect(Collectors.toList());
}
@Override
public void enqueue(int partitionNumber, List<SerializedPage> pages) {
    requireNonNull(pages, "pages is null");

    // ignore pages after "no more pages" is set
    // this can happen with a limit query
    if (!state.get().canAddPages()) {
        return;
    }

    // reserve memory
    long bytesAdded = pages.stream().mapToLong(SerializedPage::getRetainedSizeInBytes).sum();
    memoryManager.updateMemoryUsage(bytesAdded);

    // update stats
    long rowCount = pages.stream().mapToLong(SerializedPage::getPositionCount).sum();
    totalRowsAdded.addAndGet(rowCount);
    totalPagesAdded.addAndGet(pages.size());

    // create page reference counts with an initial single reference
    List<SerializedPageReference> serializedPageReferences = pages.stream()
            .map(bufferedPage -> new SerializedPageReference(
                    bufferedPage,
                    1,
                    () -> memoryManager.updateMemoryUsage(-bufferedPage.getRetainedSizeInBytes())))
            .collect(toImmutableList());

    // add pages to the buffer (this will increase the reference count by one)
    partitions.get(partitionNumber).enqueuePages(serializedPageReferences);

    // drop the initial reference
    serializedPageReferences.forEach(SerializedPageReference::dereferencePage);
}
CompactibleTimelineObjectHolderCursor(
    VersionedIntervalTimeline<String, DataSegment> timeline,
    List<Interval> totalIntervalsToSearch
) {
    this.holders = totalIntervalsToSearch
        .stream()
        .flatMap(interval -> timeline
            .lookup(interval)
            .stream()
            // Keep only holders that have at least one chunk, occupy some bytes,
            // and whose first chunk's segment lies within the search interval.
            .filter(holder -> {
                final List<PartitionChunk<DataSegment>> chunks = Lists.newArrayList(holder.getObject().iterator());
                final long partitionBytes = chunks.stream().mapToLong(chunk -> chunk.getObject().getSize()).sum();
                return chunks.size() > 0
                       && partitionBytes > 0
                       && interval.contains(chunks.get(0).getObject().getInterval());
            })
        )
        .collect(Collectors.toList());
}
// Fragments of Neo4j-style result materialization, flattened in extraction.
// Each dangling chain is shown with an inferred source and wrapper, marked as
// an assumption; in the original these fragments belong to separate branches.

// Recursively materialize a stream of values (assumed VirtualValues.fromList wrapper).
return VirtualValues.fromList( values
        .map( v -> materializeAnyResult( proxySpi, v ) )
        .collect( Collectors.toList() ) );

// Materialize node ids as node proxies (assumed source: a LongStream of ids).
return VirtualValues.fromList( nodeIds
        .mapToObj( id -> (AnyValue) ValueUtils.fromNodeProxy( proxySpi.newNodeProxy( id ) ) )
        .collect( Collectors.toList() ) );

// Materialize relationship ids as relationship proxies (assumed source likewise).
return VirtualValues.fromList( relationshipIds
        .mapToObj( id -> (AnyValue) ValueUtils.fromRelationshipProxy( proxySpi.newRelationshipProxy( id ) ) )
        .collect( Collectors.toList() ) );

// Primitive streams are drained directly into primitive array values.
long[] longArray = ((LongStream) anyValue).toArray();
return Values.longArray( longArray );

double[] doubleArray = ((DoubleStream) anyValue).toArray();
return Values.doubleArray( doubleArray );

// An IntStream is interpreted as booleans: non-zero maps to true.
return VirtualValues.fromList( ((IntStream) anyValue)
        .mapToObj( i -> Values.booleanValue( i != 0 ) )
        .collect( Collectors.toList() ) );
// Fragment of a log-cleanup routine, flattened in extraction. The first two
// expressions are trailing arguments to what is assumed to be a logging call.
LOG.info("Cleaning up: old log dirs {}, dead worker dirs {}",   // assumed log statement
        oldLogDirs.stream().map(File::getName).collect(joining(",")),
        deadWorkerDirs.stream().map(File::getName).collect(joining(",")));

// Aggregate per-worker cleanup results into the running totals.
numFilesCleaned += perWorkerDirCleanupMeta.stream().mapToInt(meta -> meta.deletedFiles).sum();
diskSpaceCleaned += perWorkerDirCleanupMeta.stream().mapToLong(meta -> meta.deletedSize).sum();

// Then apply the global size cap across all worker logs (MB -> bytes).
final DeletionMeta globalLogCleanupMeta = globalLogCleanup(maxSumWorkerLogsSizeMb * 1024 * 1024);
numFilesCleaned += globalLogCleanupMeta.deletedFiles;
static String createSnippetFromObservations(Object o) {
    // mapToObj(String::valueOf) replaces the original's redundant
    // mapToObj(v -> v).map(Object::toString) box-then-stringify chain.
    String snippet = "new " + o.getClass().getSimpleName() + "{ ";
    if (o instanceof int[]) {
        snippet += Arrays.stream((int[]) o).mapToObj(String::valueOf).collect(Collectors.joining(","));
    } else if (o instanceof double[]) {
        snippet += Arrays.stream((double[]) o).mapToObj(String::valueOf).collect(Collectors.joining(","));
    } else if (o instanceof long[]) {
        snippet += Arrays.stream((long[]) o).mapToObj(String::valueOf).collect(Collectors.joining(","));
    }
    return snippet + "}";
}
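A hypothetical usage of the helper above, showing the output shape (assuming it is in scope):

// getSimpleName() of int[].class is "int[]", hence the array-literal-like output.
String s = createSnippetFromObservations(new int[] {1, 2, 3});
System.out.println(s); // new int[]{ 1,2,3}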
@Test(dataProvider = "snapshot")
public void snapshot(boolean ascending, int limit, long nanos, Function<Long, Long> transformer) {
    int count = 21;
    timerWheel.nanos = nanos;
    int expected = Math.min(limit, count);
    Comparator<Long> order = ascending ? Comparator.naturalOrder() : Comparator.reverseOrder();
    List<Long> times = IntStream.range(0, count).mapToLong(i -> {
        long time = nanos + TimeUnit.SECONDS.toNanos(2 << i);
        timerWheel.schedule(new Timer(time));
        return time;
    }).boxed().sorted(order).collect(toList()).subList(0, expected);

    when(transformer.apply(anyLong())).thenAnswer(invocation -> invocation.getArgument(0));
    assertThat(snapshot(ascending, limit, transformer), is(times));
    verify(transformer, times(expected)).apply(anyLong());
}
/**
 * Verifies that {@link LongPredicate} evaluates all the given values to {@code true}.
 * <p>
 * Example :
 * <pre><code class='java'> LongPredicate evenNumber = n -> n % 2 == 0;
 *
 * // assertion succeeds:
 * assertThat(evenNumber).accepts(2, 4, 6);
 *
 * // assertion fails because of 3:
 * assertThat(evenNumber).accepts(2, 3, 4);</code></pre>
 *
 * @param values values that the actual {@code Predicate} should accept.
 * @return this assertion object.
 * @throws AssertionError if the actual {@code Predicate} does not accept all given values.
 */
public LongPredicateAssert accepts(long... values) {
    if (values.length == 1) return acceptsInternal(values[0]);
    return acceptsAllInternal(LongStream.of(values).boxed().collect(Collectors.toList()));
}
private List<Comparable> primitiveArrayToList(FieldSpec.DataType dataType, Object sortedValues) {
    List<Comparable> valueList;
    switch (dataType) {
        case INT:
            valueList = Arrays.stream((int[]) sortedValues).boxed().collect(Collectors.toList());
            break;
        case LONG:
            valueList = Arrays.stream((long[]) sortedValues).boxed().collect(Collectors.toList());
            break;
        case FLOAT:
            // Arrays.stream has no float[] overload, so box with a plain loop.
            valueList = new ArrayList<>();
            for (float value : (float[]) sortedValues) {
                valueList.add(value);
            }
            break;
        case DOUBLE:
            valueList = Arrays.stream((double[]) sortedValues).boxed().collect(Collectors.toList());
            break;
        default:
            throw new IllegalArgumentException("Illegal data type for mutable dictionary: " + dataType);
    }
    return valueList;
}
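A minimal standalone illustration of the float[] gap in java.util.stream (names here are illustrative):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

class FloatBoxing {
    public static void main(String[] args) {
        // int[], long[], and double[] have Arrays.stream overloads that box cleanly...
        List<Integer> ints = Arrays.stream(new int[] {3, 1, 2})
                .boxed()
                .sorted()
                .collect(Collectors.toList());

        // ...but float[] has none, so boxing falls back to a plain loop
        // (or an IntStream over the indices, as in the ORC converter above).
        float[] floats = {3f, 1f, 2f};
        List<Float> boxedFloats = new ArrayList<>();
        for (float f : floats) {
            boxedFloats.add(f);
        }

        System.out.println(ints + " " + boxedFloats); // [1, 2, 3] [3.0, 1.0, 2.0]
    }
}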
private void testAggregationBigints(InternalAggregationFunction function, Page page, double maxError, long... inputs) {
    // aggregate level
    assertAggregation(function, QDIGEST_EQUALITY, "test multiple positions", page, getExpectedValueLongs(maxError, inputs));

    // test scalars
    List<Long> rows = Arrays.stream(inputs).sorted().boxed().collect(Collectors.toList());

    SqlVarbinary returned = (SqlVarbinary) AggregationTestUtils.aggregation(function, page);
    assertPercentileWithinError(StandardTypes.BIGINT, returned, maxError, rows, 0.1, 0.5, 0.9, 0.99);
}
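The sorted-then-boxed varargs idiom from the test in isolation; the wrapper class is illustrative:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

class SortedBoxedDemo {
    static List<Long> sortedRows(long... inputs) {
        // Sorting on the primitive LongStream defers boxing until collection time.
        return Arrays.stream(inputs).sorted().boxed().collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(sortedRows(9, 1, 5)); // [1, 5, 9]
    }
}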