NameNodeConnector.getLiveDatanodeStorageReport

How to use the getLiveDatanodeStorageReport method in org.apache.hadoop.hdfs.server.balancer.NameNodeConnector

Best Java code snippets using org.apache.hadoop.hdfs.server.balancer.NameNodeConnector.getLiveDatanodeStorageReport (Showing top 5 results out of 315)

origin: org.apache.hadoop/hadoop-hdfs

@Override
public DatanodeStorageReport[] getLiveDatanodeStorageReport()
    throws IOException {
  return nnc.getLiveDatanodeStorageReport();
}
origin: org.apache.hadoop/hadoop-hdfs

/** Get live datanode storage reports and then build the network topology. */
public List<DatanodeStorageReport> init() throws IOException {
  final DatanodeStorageReport[] reports = nnc.getLiveDatanodeStorageReport();
  final List<DatanodeStorageReport> trimmed = new ArrayList<DatanodeStorageReport>();
  // create network topology and classify utilization collections:
  // over-utilized, above-average, below-average and under-utilized.
  for (DatanodeStorageReport r : DFSUtil.shuffle(reports)) {
    final DatanodeInfo datanode = r.getDatanodeInfo();
    if (shouldIgnore(datanode)) {
      continue;
    }
    trimmed.add(r);
    cluster.add(datanode);
  }
  return trimmed;
}
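The comment in init() refers to the Balancer's four utilization buckets. Stripped of Hadoop types, the classification arithmetic can be sketched as follows; the method names and constants here are illustrative assumptions, not the Balancer's actual API (Hadoop's default balancing threshold is 10 percent):

```java
public class UtilizationSketch {
  // Utilization of one datanode as a percentage of its raw capacity,
  // analogous to dfsUsed/capacity taken from a DatanodeStorageReport.
  static double utilizationPercent(long dfsUsed, long capacity) {
    return capacity == 0 ? 0.0 : dfsUsed * 100.0 / capacity;
  }

  // Bucket a node relative to the cluster-average utilization and a
  // balancer threshold. Boundary handling is a simplifying assumption.
  static String classify(double util, double avg, double threshold) {
    if (util > avg + threshold) return "over-utilized";
    if (util > avg) return "above-average";
    if (util >= avg - threshold) return "below-average";
    return "under-utilized";
  }

  public static void main(String[] args) {
    double avg = 60.0, threshold = 10.0;
    // A node at 95% utilization against a 60% cluster average:
    System.out.println(classify(utilizationPercent(95, 100), avg, threshold));
    // prints: over-utilized
  }
}
```

Nodes in the two outer buckets are the ones the Balancer moves blocks away from or toward; the shuffle in init() just randomizes the order in which they are considered.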
origin: org.apache.hadoop/hadoop-hdfs

/**
 * Returns a list of DiskBalancerDataNodes.
 *
 * @return List of DiskBalancerDataNodes
 */
@Override
public List<DiskBalancerDataNode> getNodes() throws Exception {
  Preconditions.checkNotNull(this.connector);
  List<DiskBalancerDataNode> nodeList = new LinkedList<>();
  DatanodeStorageReport[] reports = this.connector
      .getLiveDatanodeStorageReport();
  for (DatanodeStorageReport report : reports) {
    DiskBalancerDataNode datanode = getBalancerNodeFromDataNode(
        report.getDatanodeInfo());
    getVolumeInfoFromStorageReports(datanode, report.getStorageReports());
    nodeList.add(datanode);
  }
  return nodeList;
}
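The loop above pairs each datanode with its per-volume storage reports. Stripped of Hadoop types, the grouping it performs looks roughly like this; VolumeRecord is a hypothetical stand-in for one StorageReport entry, not a Hadoop class:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GroupVolumesSketch {
  // Hypothetical stand-in for one per-volume storage report entry.
  record VolumeRecord(String nodeId, String volumePath, long capacity) {}

  // Group per-volume records under their owning datanode, preserving
  // first-seen node order, much as getNodes() does when it walks the
  // DatanodeStorageReport array.
  static Map<String, List<VolumeRecord>> groupByNode(List<VolumeRecord> volumes) {
    Map<String, List<VolumeRecord>> byNode = new LinkedHashMap<>();
    for (VolumeRecord v : volumes) {
      byNode.computeIfAbsent(v.nodeId(), k -> new ArrayList<>()).add(v);
    }
    return byNode;
  }

  public static void main(String[] args) {
    List<VolumeRecord> vols = List.of(
        new VolumeRecord("dn1", "/data/1", 100L),
        new VolumeRecord("dn2", "/data/1", 200L),
        new VolumeRecord("dn1", "/data/2", 100L));
    // Two volumes end up attached to dn1, one to dn2.
    System.out.println(groupByNode(vols).get("dn1").size());
    // prints: 2
  }
}
```

In the real code this grouping is implicit: each DatanodeStorageReport already carries its own StorageReport array, so getNodes() only needs one pass.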

Popular methods of NameNodeConnector

  • newNameNodeConnectors
  • <init>
  • checkAndMarkRunning
    The idea for making sure that there is no more than one instance running in an HDFS is to create a f…
  • getBlockpoolID
  • getBlocks
  • getBytesMoved
  • getDistributedFileSystem
  • getKeyManager
  • getTargetPaths
  • isUpgrading
  • shouldContinue
    Should the instance continue running?
  • setWrite2IdFile
  • close
  • getFallbackToSimpleAuth
  • getNNProtocolConnection
