private boolean equivalentEncryptionZones(EncryptionZone zone1, EncryptionZone zone2) {
  if (zone1 == null && zone2 == null) {
    return true;
  } else if (zone1 == null || zone2 == null) {
    return false;
  }
  return zone1.equals(zone2);
}
/**
 * Lists all (enabled, disabled and removed) erasure coding policies registered in HDFS.
 * @return a list of erasure coding policies
 */
@Override
public List<HdfsFileErasureCodingPolicy> getAllErasureCodingPolicies() throws IOException {
  ErasureCodingPolicyInfo[] erasureCodingPolicies = hdfsAdmin.getErasureCodingPolicies();
  List<HdfsFileErasureCodingPolicy> policies = new ArrayList<>(erasureCodingPolicies.length);
  for (ErasureCodingPolicyInfo erasureCodingPolicy : erasureCodingPolicies) {
    policies.add(new HdfsFileErasureCodingPolicyImpl(
        erasureCodingPolicy.getPolicy().getName(),
        erasureCodingPolicy.getState().toString()));
  }
  return policies;
}
static Encryptor createEncryptor(Configuration conf, HdfsFileStatus stat, DFSClient client)
    throws IOException {
  FileEncryptionInfo feInfo = stat.getFileEncryptionInfo();
  if (feInfo == null) {
    return null;
  }
  return TRANSPARENT_CRYPTO_HELPER.createEncryptor(conf, feInfo, client);
}
}
BlockConstructionStage stage, DataChecksum summer, EventLoopGroup eventLoopGroup,
    Class<? extends Channel> channelClass) {
  Enum<?>[] storageTypes = locatedBlock.getStorageTypes();
  DatanodeInfo[] datanodeInfos = locatedBlock.getLocations();
  boolean connectToDnViaHostname =
      conf.getBoolean(DFS_CLIENT_USE_DN_HOSTNAME, DFS_CLIENT_USE_DN_HOSTNAME_DEFAULT);
  int timeoutMs = conf.getInt(DFS_CLIENT_SOCKET_TIMEOUT_KEY, READ_TIMEOUT);
  ExtendedBlock blockCopy = new ExtendedBlock(locatedBlock.getBlock());
  blockCopy.setNumBytes(locatedBlock.getBlockSize());
  ClientOperationHeaderProto header = ClientOperationHeaderProto.newBuilder()
      .setBaseHeader(BaseHeaderProto.newBuilder().setBlock(PB_HELPER.convert(blockCopy))
          .setToken(PB_HELPER.convert(locatedBlock.getBlockToken())))
      .setClientName(clientName).build();
  ChecksumProto checksumProto = DataTransferProtoUtil.toProto(summer);
  OpWriteBlockProto.Builder writeBlockProtoBuilder = OpWriteBlockProto.newBuilder()
      .setHeader(header).setStage(OpWriteBlockProto.BlockConstructionStage.valueOf(stage.name()))
      .setPipelineSize(1).setMinBytesRcvd(locatedBlock.getBlock().getNumBytes())
      .setMaxBytesRcvd(maxBytesRcvd).setLatestGenerationStamp(latestGS)
      .setRequestedChecksum(checksumProto)
      .setCachingStrategy(CachingStrategyProto.newBuilder().setDropBehind(true).build());
  List<Future<Channel>> futureList = new ArrayList<>(datanodeInfos.length);
  for (int i = 0; i < datanodeInfos.length; i++) {
    DatanodeInfo dnInfo = datanodeInfos[i];
    Promise<Channel> promise = eventLoopGroup.next().newPromise();
    futureList.add(promise);
    String dnAddr = dnInfo.getXferAddr(connectToDnViaHostname);
    new Bootstrap().group(eventLoopGroup).channel(channelClass)
        .option(CONNECT_TIMEOUT_MILLIS, timeoutMs).handler(new ChannelInitializer<Channel>() {
private void testFromDFS(DistributedFileSystem dfs, String src, int repCount, String localhost)
    throws Exception {
  // Multiple times as the order is random
  for (int i = 0; i < 10; i++) {
    LocatedBlocks l;
    // The NN gets the block list asynchronously, so we may need multiple tries to get the list
    final long max = System.currentTimeMillis() + 10000;
    boolean done;
    do {
      Assert.assertTrue("Can't get enough replicas.", System.currentTimeMillis() < max);
      l = getNamenode(dfs.getClient()).getBlockLocations(src, 0, 1);
      Assert.assertNotNull("Can't get block locations for " + src, l);
      Assert.assertNotNull(l.getLocatedBlocks());
      Assert.assertTrue(l.getLocatedBlocks().size() > 0);
      done = true;
      for (int y = 0; y < l.getLocatedBlocks().size() && done; y++) {
        done = (l.get(y).getLocations().length == repCount);
      }
    } while (!done);
    for (int y = 0; y < l.getLocatedBlocks().size() && done; y++) {
      Assert.assertEquals(localhost, l.get(y).getLocations()[repCount - 1].getHostName());
    }
  }
}
@Override
public void reorderBlocks(Configuration c, LocatedBlocks lbs, String src) {
  for (LocatedBlock lb : lbs.getLocatedBlocks()) {
    if (lb.getLocations().length > 1) {
      DatanodeInfo[] infos = lb.getLocations();
      if (infos[0].getHostName().equals(lookup)) {
        LOG.info("HFileSystem bad host, inverting");
        DatanodeInfo tmp = infos[0];
        infos[0] = infos[1];
        infos[1] = tmp;
      }
    }
  }
}
}));
@Override
public List<HdfsFileStatusWithId> listLocatedHdfsStatus(
    FileSystem fs, Path p, PathFilter filter) throws IOException {
  DistributedFileSystem dfs = ensureDfs(fs);
  DFSClient dfsc = dfs.getClient();
  final String src = p.toUri().getPath();
  DirectoryListing current = dfsc.listPaths(src,
      org.apache.hadoop.hdfs.protocol.HdfsFileStatus.EMPTY_NAME, true);
  if (current == null) {
    // the directory does not exist
    throw new FileNotFoundException("File " + p + " does not exist.");
  }
  final URI fsUri = fs.getUri();
  List<HdfsFileStatusWithId> result =
      new ArrayList<HdfsFileStatusWithId>(current.getPartialListing().length);
  while (current != null) {
    org.apache.hadoop.hdfs.protocol.HdfsFileStatus[] hfss = current.getPartialListing();
    for (int i = 0; i < hfss.length; ++i) {
      HdfsLocatedFileStatus next = (HdfsLocatedFileStatus) (hfss[i]);
      if (filter != null) {
        Path filterPath = next.getFullPath(p).makeQualified(fsUri, null);
        if (!filter.accept(filterPath)) continue;
      }
      LocatedFileStatus lfs = next.makeQualifiedLocated(fsUri, p);
      result.add(new HdfsFileStatusWithIdImpl(lfs, next.getFileId()));
    }
    current = current.hasMore() ? dfsc.listPaths(src, current.getLastName(), true) : null;
  }
  return result;
}
unspecifiedStoragePolicyId = idUnspecified.getByte(BlockStoragePolicySuite.class);
byte storagePolicyId = status.getStoragePolicy();
if (storagePolicyId != unspecifiedStoragePolicyId) {
  BlockStoragePolicy[] policies = dfs.getStoragePolicies();
  for (BlockStoragePolicy policy : policies) {
    if (policy.getId() == storagePolicyId) {
      return policy.getName();
FanOutOneBlockAsyncDFSOutput(Configuration conf, FSUtils fsUtils, DistributedFileSystem dfs,
    DFSClient client, ClientProtocol namenode, String clientName, String src, long fileId,
    LocatedBlock locatedBlock, Encryptor encryptor, List<Channel> datanodeList,
    DataChecksum summer, ByteBufAllocator alloc) {
  this.conf = conf;
  this.fsUtils = fsUtils;
  this.dfs = dfs;
  this.client = client;
  this.namenode = namenode;
  this.fileId = fileId;
  this.clientName = clientName;
  this.src = src;
  this.block = locatedBlock.getBlock();
  this.locations = locatedBlock.getLocations();
  this.encryptor = encryptor;
  this.datanodeList = datanodeList;
  this.summer = summer;
  this.maxDataLen = MAX_DATA_LEN - (MAX_DATA_LEN % summer.getBytesPerChecksum());
  this.alloc = alloc;
  this.buf = alloc.directBuffer(sendBufSizePRedictor.initialSize());
  this.state = State.STREAMING;
  setupReceiver(conf.getInt(DFS_CLIENT_SOCKET_TIMEOUT_KEY, READ_TIMEOUT));
}
public ServerName[] getDataNodes() throws IOException {
  DistributedFileSystem fs =
      (DistributedFileSystem) FSUtils.getRootDir(getConf()).getFileSystem(getConf());
  DFSClient dfsClient = fs.getClient();
  List<ServerName> hosts = new LinkedList<>();
  for (DatanodeInfo dataNode : dfsClient.datanodeReport(HdfsConstants.DatanodeReportType.LIVE)) {
    hosts.add(ServerName.valueOf(dataNode.getHostName(), -1, -1));
  }
  return hosts.toArray(new ServerName[hosts.size()]);
}
}
@Override
public long getFileId(FileSystem fs, String path) throws IOException {
  return ensureDfs(fs).getClient().getFileInfo(path).getFileId();
}
@Test
public void testLogRollOnDatanodeDeath() throws IOException, InterruptedException {
  dfsCluster.startDataNodes(TEST_UTIL.getConfiguration(), 3, true, null, null);
  tableName = getName();
  Table table = createTestTable(tableName);
  TEST_UTIL.waitUntilAllRegionsAssigned(table.getName());
  doPut(table, 1);
  server = TEST_UTIL.getRSForFirstRegionInTable(table.getName());
  RegionInfo hri = server.getRegions(table.getName()).get(0).getRegionInfo();
  AsyncFSWAL wal = (AsyncFSWAL) server.getWAL(hri);
  int numRolledLogFiles = AsyncFSWALProvider.getNumRolledLogFiles(wal);
  DatanodeInfo[] dnInfos = wal.getPipeline();
  DataNodeProperties dnProp = TEST_UTIL.getDFSCluster().stopDataNode(dnInfos[0].getName());
  TEST_UTIL.getDFSCluster().restartDataNode(dnProp);
  doPut(table, 2);
  assertEquals(numRolledLogFiles + 1, AsyncFSWALProvider.getNumRolledLogFiles(wal));
}
}
static void completeFile(DFSClient client, ClientProtocol namenode, String src,
    String clientName, ExtendedBlock block, long fileId) {
  for (int retry = 0;; retry++) {
    try {
      if (namenode.complete(src, clientName, block, fileId)) {
        endFileLease(client, fileId);
        return;
      } else {
        LOG.warn("complete file " + src + " not finished, retry = " + retry);
      }
    } catch (RemoteException e) {
      IOException ioe = e.unwrapRemoteException();
      if (ioe instanceof LeaseExpiredException) {
        LOG.warn("lease for file " + src + " is expired, give up", e);
        return;
      } else {
        LOG.warn("complete file " + src + " failed, retry = " + retry, e);
      }
    } catch (Exception e) {
      LOG.warn("complete file " + src + " failed, retry = " + retry, e);
    }
    sleepIgnoreInterrupt(retry);
  }
}
/**
 * End the current block and complete file at namenode. You should call
 * {@link #recoverAndClose(CancelableProgressable)} if this method throws an exception.
 */
@Override
public void close() throws IOException {
  endBlock();
  state = State.CLOSED;
  datanodeList.forEach(ch -> ch.close());
  datanodeList.forEach(ch -> ch.closeFuture().awaitUninterruptibly());
  block.setNumBytes(ackedBlockLength);
  completeFile(client, namenode, src, clientName, block, fileId);
}
/**
 * Compares two encryption key strengths.
 *
 * @param zone1 First EncryptionZone to compare
 * @param zone2 Second EncryptionZone to compare
 * @return 1 if zone1 is stronger; 0 if zones are equal; -1 if zone1 is weaker.
 * @throws IOException If an error occurred attempting to get key metadata
 */
private int compareKeyStrength(EncryptionZone zone1, EncryptionZone zone2) throws IOException {
  // zone1, zone2 should already have been checked for nulls.
  assert zone1 != null && zone2 != null
      : "Neither EncryptionZone under comparison can be null.";
  CipherSuite suite1 = zone1.getSuite();
  CipherSuite suite2 = zone2.getSuite();
  if (suite1 == null && suite2 == null) {
    return 0;
  } else if (suite1 == null) {
    return -1;
  } else if (suite2 == null) {
    return 1;
  }
  return Integer.compare(suite1.getAlgorithmBlockSize(), suite2.getAlgorithmBlockSize());
}
}
/**
 * Get details of the erasure coding policy of a file or directory at the specified path.
 * @param path an hdfs file or directory
 * @return an erasure coding policy
 */
@Override
public HdfsFileErasureCodingPolicy getErasureCodingPolicy(Path path) throws IOException {
  ErasureCodingPolicy erasureCodingPolicy = hdfsAdmin.getErasureCodingPolicy(path);
  if (erasureCodingPolicy == null) {
    return null;
  }
  return new HdfsFileErasureCodingPolicyImpl(erasureCodingPolicy.getName());
}
final long max = System.currentTimeMillis() + 10000;
do {
  l = getNamenode(dfs.getClient()).getBlockLocations(fileName, 0, 1);
  Assert.assertNotNull(l.getLocatedBlocks());
  Assert.assertEquals(1, l.getLocatedBlocks().size());
  Assert.assertTrue("Expecting " + repCount + ", got " + l.get(0).getLocations().length,
      System.currentTimeMillis() < max);
} while (l.get(0).getLocations().length != repCount);

Object[] originalList = l.getLocatedBlocks().toArray();
HFileSystem.ReorderWALBlocks lrb = new HFileSystem.ReorderWALBlocks();
lrb.reorderBlocks(conf, l, fileName);
Assert.assertArrayEquals(originalList, l.getLocatedBlocks().toArray());
Assert.assertEquals(host1, l.get(0).getLocations()[2].getHostName());
for (LocatedBlock lb : lbs.getLocatedBlocks()) {
  DatanodeInfo[] dnis = lb.getLocations();
  if (dnis != null && dnis.length > 1) {
    boolean found = false;
    for (int i = 0; i < dnis.length - 1 && !found; i++) {
      if (hostName.equals(dnis[i].getHostName())) {
private String getStoragePolicyNameForOldHDFSVersion(FileSystem fs, Path path) {
  try {
    if (fs instanceof DistributedFileSystem) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      HdfsFileStatus status = dfs.getClient().getFileInfo(path.toUri().getPath());
      if (null != status) {
        byte storagePolicyId = status.getStoragePolicy();
        Field idUnspecified = BlockStoragePolicySuite.class.getField("ID_UNSPECIFIED");
        if (storagePolicyId != idUnspecified.getByte(BlockStoragePolicySuite.class)) {
          BlockStoragePolicy[] policies = dfs.getStoragePolicies();
          for (BlockStoragePolicy policy : policies) {
            if (policy.getId() == storagePolicyId) {
              return policy.getName();
            }
          }
        }
      }
    }
  } catch (Throwable e) {
    LOG.warn("failed to get block storage policy of [" + path + "]", e);
  }
  return null;
}
public static long getFileId(FileSystem fs, String path) throws IOException {
  return ensureDfs(fs).getClient().getFileInfo(path).getFileId();
}