// Work on a copy so the cached LocatedBlock's ExtendedBlock is not mutated
// when the length is set, then build the operation header carrying the block
// and its access token.
ExtendedBlock blockCopy = new ExtendedBlock(locatedBlock.getBlock());
blockCopy.setNumBytes(locatedBlock.getBlockSize());
ClientOperationHeaderProto header = ClientOperationHeaderProto.newBuilder()
    .setBaseHeader(BaseHeaderProto.newBuilder()
        .setBlock(PB_HELPER.convert(blockCopy))
        .setToken(PB_HELPER.convert(locatedBlock.getBlockToken())))
    .setClientName(clientName) // clientName assumed to be defined in the enclosing scope
    .build();
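// Once built, the header is typically wrapped in a concrete operation message
// before being written to the DataNode stream. A minimal sketch, assuming a
// read of the whole block from offset 0; 'out' (the output stream to the
// DataNode) is a placeholder, and real client code normally goes through the
// Sender class, which also writes the protocol version and op code first.
OpReadBlockProto op = OpReadBlockProto.newBuilder()
    .setHeader(header)
    .setOffset(0)
    .setLen(blockCopy.getNumBytes())
    .build();
op.writeDelimitedTo(out); // standard protobuf delimited framing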
/**
 * <code>required .hadoop.hdfs.ClientOperationHeaderProto header = 1;</code>
 */
public Builder mergeHeader(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto value) {
  if (headerBuilder_ == null) {
    if (((bitField0_ & 0x00000001) == 0x00000001) &&
        header_ != org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto.getDefaultInstance()) {
      header_ = org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto
          .newBuilder(header_).mergeFrom(value).buildPartial();
    } else {
      header_ = value;
    }
    onChanged();
  } else {
    headerBuilder_.mergeFrom(value);
  }
  bitField0_ |= 0x00000001;
  return this;
}
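// The generated merge keeps whatever is already set on the existing header and
// only overlays the fields present in 'value'. A small illustration, assuming
// the containing message is OpReadBlockProto (any message with a
// ClientOperationHeaderProto header field gets the same generated method) and
// that blk/blockToken are in scope for the buildBaseHeader helper shown below.
OpReadBlockProto.Builder opBuilder = OpReadBlockProto.newBuilder()
    .setHeader(ClientOperationHeaderProto.newBuilder()
        .setClientName("DFSClient_example")   // only clientName set so far
        .buildPartial());
opBuilder.mergeHeader(ClientOperationHeaderProto.newBuilder()
    .setBaseHeader(buildBaseHeader(blk, blockToken))
    .buildPartial());
// opBuilder.getHeader() now carries both clientName and the merged baseHeader.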
public Builder newBuilderForType() { return newBuilder(); }

public static Builder newBuilder(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto prototype) {
  return newBuilder().mergeFrom(prototype);
}

public Builder toBuilder() { return newBuilder(this); }
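// These factory methods give the usual protobuf copy-and-modify round trip:
// toBuilder() (or newBuilder(prototype)) seeds a builder from an existing
// message, which can then be tweaked and rebuilt without touching the
// original. A minimal sketch, assuming 'header' is an already-built
// ClientOperationHeaderProto; the replacement client name is hypothetical.
ClientOperationHeaderProto renamed = header.toBuilder()
    .setClientName("DFSClient_other")
    .build();
// Equivalent form using the static factory.
ClientOperationHeaderProto renamed2 =
    ClientOperationHeaderProto.newBuilder(header)
        .setClientName("DFSClient_other")
        .build();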
static ClientOperationHeaderProto buildClientHeader(ExtendedBlock blk,
    String client, Token<BlockTokenIdentifier> blockToken) {
  return ClientOperationHeaderProto.newBuilder()
      .setBaseHeader(buildBaseHeader(blk, blockToken))
      .setClientName(client)
      .build();
}
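// For context, the buildBaseHeader helper referenced above fills in the block
// and its access token on the BaseHeaderProto. A rough sketch of such a helper
// (not a verbatim copy of the upstream method; trace/span handling present in
// some Hadoop versions is omitted, and PB_HELPER matches the converter used in
// the first snippet):
static BaseHeaderProto buildBaseHeader(ExtendedBlock blk,
    Token<BlockTokenIdentifier> blockToken) {
  return BaseHeaderProto.newBuilder()
      .setBlock(PB_HELPER.convert(blk))
      .setToken(PB_HELPER.convert(blockToken))
      .build();
}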