StringUtils.humanReadableInt

How to use humanReadableInt method in org.apache.hadoop.util.StringUtils

Best Java code snippets using org.apache.hadoop.util.StringUtils.humanReadableInt (Showing top 20 results out of 315)
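Before the snippets, a minimal sketch of the pattern most of them share: take a byte count and pass it through humanReadableInt when building a log or status message. The class and helper below are illustrative only (not from HBase or Hadoop) and assume hadoop-common is on the classpath.

import org.apache.hadoop.util.StringUtils;

// Illustrative helper (not part of HBase): reports a size both as a raw byte
// count and in the approximate human-readable form used in the snippets below.
public class SizeReport {

  static String describeSize(long sizeInBytes) {
    return sizeInBytes + " bytes (" + StringUtils.humanReadableInt(sizeInBytes) + ")";
  }

  public static void main(String[] args) {
    System.out.println("Using bufferSize=" + describeSize(64L * 1024 * 1024));
  }
}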

origin: apache/hbase

private String fileSizeToString(long size) {
 return printSizeInBytes ? Long.toString(size) : StringUtils.humanReadableInt(size);
}
origin: apache/hbase

LOG.debug("export split=" + i + " size=" + StringUtils.humanReadableInt(sizeGroups[i]));
origin: apache/hbase

@Override
public void postAppend(final long size, final long time, final WALKey logkey,
  final WALEdit logEdit) throws IOException {
 source.incrementAppendCount();
 source.incrementAppendTime(time);
 source.incrementAppendSize(size);
 source.incrementWrittenBytes(size);
 if (time > 1000) {
  source.incrementSlowAppendCount();
  LOG.warn(String.format("%s took %d ms appending an edit to wal; len~=%s",
    Thread.currentThread().getName(),
    time,
    StringUtils.humanReadableInt(size)));
 }
}
origin: apache/hbase

LOG.info("Using bufferSize=" + StringUtils.humanReadableInt(bufferSize));
origin: apache/hbase

 final long inputFileSize)
 throws IOException {
final String statusMessage = "copied %s/" + StringUtils.humanReadableInt(inputFileSize) +
               " (%.1f%%)";
   context.getCounter(Counter.BYTES_COPIED).increment(reportBytes);
   context.setStatus(String.format(statusMessage,
            StringUtils.humanReadableInt(totalBytesWritten),
            (totalBytesWritten/(float)inputFileSize) * 100.0f) +
            " from " + inputPath + " to " + outputPath);
   " (" + StringUtils.humanReadableInt(totalBytesWritten) + ")" +
   " time=" + StringUtils.formatTimeDiff(etime, stime) +
   String.format(" %.3fM/sec", (totalBytesWritten / ((etime - stime)/1000.0))/1048576.0));
origin: apache/hbase

+ numKeys
+ ", cols="
+ StringUtils.humanReadableInt(numCols.get())
+ ", time="
+ formatTime(time)
origin: apache/hbase

  + "(" + StringUtils.humanReadableInt(curSize) + ")");
int index = 0;
for (KeyValue kv : kvs) {
origin: apache/hbase

  + "(" + StringUtils.humanReadableInt(curSize) + ")");
int index = 0;
for (KeyValue kv : map) {
origin: apache/hbase

 protected HStoreFile createMockStoreFile(final long sizeInBytes, final long seqId) {
  HStoreFile mockSf = mock(HStoreFile.class);
  StoreFileReader reader = mock(StoreFileReader.class);
  String stringPath = "/hbase/testTable/regionA/" +
    RandomStringUtils.random(FILENAME_LENGTH, 0, 0, true, true, null, random);
  Path path = new Path(stringPath);


  when(reader.getSequenceID()).thenReturn(seqId);
  when(reader.getTotalUncompressedBytes()).thenReturn(sizeInBytes);
  when(reader.length()).thenReturn(sizeInBytes);

  when(mockSf.getPath()).thenReturn(path);
  when(mockSf.excludeFromMinorCompaction()).thenReturn(false);
  when(mockSf.isReference()).thenReturn(false); // TODO come back to
  // this when selection takes this into account
  when(mockSf.getReader()).thenReturn(reader);
  String toString = MoreObjects.toStringHelper("MockStoreFile")
    .add("isReference", false)
    .add("fileSize", StringUtils.humanReadableInt(sizeInBytes))
    .add("seqId", seqId)
    .add("path", stringPath).toString();
  when(mockSf.toString()).thenReturn(toString);

  return mockSf;
 }
}
origin: apache/hbase

LOG.info("Input split length: " + StringUtils.humanReadableInt(tSplit.getLength()) + " bytes.");
final TableRecordReader trr =
  this.tableRecordReader != null ? this.tableRecordReader : new TableRecordReader();
origin: apache/hbase

out.print( new Date(snapshotDesc.getCreationTime()) );
out.write("</td>\n      <td>");
out.print( StringUtils.humanReadableInt(stats.getSharedStoreFilesSize()) );
out.write("</td>\n      <td>");
out.print( StringUtils.humanReadableInt(stats.getMobStoreFilesSize())  );
out.write("</td>\n      <td>");
out.print( StringUtils.humanReadableInt(stats.getArchivedStoreFileSize()) );
out.write("\n        (");
out.print( StringUtils.humanReadableInt(stats.getNonSharedArchivedStoreFilesSize()) );
out.write(")</td>\n    </tr>\n    ");
out.print( snapshots.size() );
out.write(" snapshot(s) in set.</p>\n    <p>Total Storefile Size: ");
out.print( StringUtils.humanReadableInt(totalSize) );
out.write("</p>\n    <p>Total Shared Storefile Size: ");
out.print( StringUtils.humanReadableInt(totalSharedSize.get()) );
out.write(",\n       Total Mob Storefile Size: ");
out.print( StringUtils.humanReadableInt(totalMobSize.get()) );
out.write(",\n       Total Archived Storefile Size: ");
out.print( StringUtils.humanReadableInt(totalArchivedSize.get()) );
out.write("\n       (");
out.print( StringUtils.humanReadableInt(totalUnsharedArchivedSize) );
out.write(")</p>\n    <p>Shared Storefile Size is the Storefile size shared between snapshots and active tables.\n       Mob Storefile Size is the Mob Storefile size shared between snapshots and active tables.\n       Archived Storefile Size is the Storefile size in Archive.\n       The format of Archived Storefile Size is NNN(MMM). NNN is the total Storefile\n       size in Archive, MMM is the total Storefile size in Archive that is specific\n       to the snapshot (not shared with other snapshots and tables)</p>\n  </table>\n</div>\n\n");
org.apache.jasper.runtime.JspRuntimeLibrary.include(request, response, "footer.jsp", out, false);
origin: apache/hbase

+ rootLevelIndexPos + ", " + rootChunk.getNumEntries()
+ " root-level entries, " + totalNumEntries + " total entries, "
+ StringUtils.humanReadableInt(this.totalBlockOnDiskSize) +
" on-disk size, "
+ StringUtils.humanReadableInt(totalBlockUncompressedSize) +
" total uncompressed size.");
origin: apache/hbase

  + StringUtils.humanReadableInt(totalBytes) + ")");
System.out.println("throughput  : " + StringUtils.humanReadableInt((long)throughput) + "B/s");
System.out.println("total rows  : " + numRows);
System.out.println("throughput  : " + StringUtils.humanReadableInt((long)throughputRows) + " rows/s");
System.out.println("total cells : " + numCells);
System.out.println("throughput  : " + StringUtils.humanReadableInt((long)throughputCells) + " cells/s");
origin: apache/hbase

protected void testHdfsStreaming(Path filename) throws IOException {
 byte[] buf = new byte[1024];
 FileSystem fs = filename.getFileSystem(getConf());
 // read the file from start to finish
 Stopwatch fileOpenTimer = Stopwatch.createUnstarted();
 Stopwatch streamTimer = Stopwatch.createUnstarted();
 fileOpenTimer.start();
 FSDataInputStream in = fs.open(filename);
 fileOpenTimer.stop();
 long totalBytes = 0;
 streamTimer.start();
 while (true) {
  int read = in.read(buf);
  if (read < 0) {
   break;
  }
  totalBytes += read;
 }
 streamTimer.stop();
 double throughput = (double)totalBytes / streamTimer.elapsed(TimeUnit.SECONDS);
 System.out.println("HDFS streaming: ");
 System.out.println("total time to open: " +
  fileOpenTimer.elapsed(TimeUnit.MILLISECONDS) + " ms");
 System.out.println("total time to read: " + streamTimer.elapsed(TimeUnit.MILLISECONDS) + " ms");
 System.out.println("total bytes: " + totalBytes + " bytes ("
   + StringUtils.humanReadableInt(totalBytes) + ")");
 System.out.println("throghput  : " + StringUtils.humanReadableInt((long)throughput) + "B/s");
}
origin: apache/hbase

org.jamon.escaping.Escaping.NONE.write(org.jamon.emit.StandardEmitter.valueOf(peerConfig.getBandwidth() == 0? "UNLIMITED" : StringUtils.humanReadableInt(peerConfig.getBandwidth())), jamonWriter);
origin: apache/hbase

  + StringUtils.humanReadableInt(newStoreMemstoreSize - storeMemstoreSize));
assertTrue(storeMemstoreSize > newStoreMemstoreSize);
origin: apache/hbase

  + StringUtils.humanReadableInt(newStoreMemstoreSize - storeMemstoreSize));
assertTrue(storeMemstoreSize > newStoreMemstoreSize);
verifyData(secondaryRegion, 0, lastReplayed+1, cq, families);
org.apache.hadoop.util.StringUtils.humanReadableInt

Javadoc

Given an integer, return a string that is in an approximate, but human readable format. It uses the bases 'k', 'm', and 'g' for 1024, 1024**2, and 1024**3.
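A small hedged sketch of the behaviour the Javadoc describes; the exact decimal formatting can differ between Hadoop versions, so the outputs noted in the comments are approximate.

import org.apache.hadoop.util.StringUtils;

public class HumanReadableIntDemo {
  public static void main(String[] args) {
    System.out.println(StringUtils.humanReadableInt(512));                     // below 1024: printed as-is
    System.out.println(StringUtils.humanReadableInt(2L * 1024));               // roughly "2k"
    System.out.println(StringUtils.humanReadableInt(3L * 1024 * 1024));        // roughly "3m"
    System.out.println(StringUtils.humanReadableInt(5L * 1024 * 1024 * 1024)); // roughly "5g"
  }
}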

Popular methods of StringUtils

  • stringifyException
    Make a string representation of the exception.
  • join
    Concatenates strings, using a separator.
  • split
  • arrayToString
  • toLowerCase
    Converts all of the characters in this String to lower case with Locale.ENGLISH.
  • escapeString
  • startupShutdownMessage
    Print a log message for starting up and shutting down
  • getStrings
    Returns an arraylist of strings.
  • toUpperCase
    Converts all of the characters in this String to upper case with Locale.ENGLISH.
  • byteToHexString
    Given an array of bytes it will convert the bytes to a hex string representation of the bytes
  • formatTime
    Given the time in long milliseconds, returns a String in the format Xhrs, Ymins, Z sec.
  • unEscapeString
  • getStringCollection
  • byteDesc
  • formatPercent
  • getTrimmedStrings
  • equalsIgnoreCase
  • format
  • formatTimeDiff
    Used together with humanReadableInt in the export snippet above (see the sketch after this list).
  • getTrimmedStringCollection
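formatTimeDiff appears next to humanReadableInt in the snapshot-export copy snippet above. A short sketch of how the two combine; the byte count and duration below are made-up values for illustration.

import org.apache.hadoop.util.StringUtils;

public class TransferSummary {
  public static void main(String[] args) {
    long startTime = System.currentTimeMillis();
    long finishTime = startTime + 75_000L;        // assume the copy took 75 seconds
    long bytesCopied = 3L * 1024 * 1024 * 1024;   // assume 3 GiB were copied

    System.out.println("size=" + bytesCopied +
      " (" + StringUtils.humanReadableInt(bytesCopied) + ")" +
      " time=" + StringUtils.formatTimeDiff(finishTime, startTime));
  }
}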
