static DistributedLogClientBuilder createDistributedLogClientBuilder(ServerSet serverSet) {
    return DistributedLogClientBuilder.newBuilder()
            .name("rebalancer_tool")
            .clientId(ClientId$.MODULE$.apply("rebalancer_tool"))
            .maxRedirects(2)
            .serverSet(serverSet)
            .clientBuilder(ClientBuilder.get()
                    .connectionTimeout(Duration.fromSeconds(2))
                    .tcpConnectTimeout(Duration.fromSeconds(2))
                    .requestTimeout(Duration.fromSeconds(10))
                    .hostConnectionLimit(1)
                    .hostConnectionCoresize(1)
                    .keepAlive(true)
                    .failFast(false));
}
@SuppressWarnings("unchecked")
private ClientBuilder setDefaultSettings(ClientBuilder builder) {
    return builder.name(clientName)
            .codec(ThriftClientFramedCodec.apply(Option.apply(clientId)))
            .failFast(false)
            .noFailureAccrual()
            // Disable retries on the finagle client builder: there is only one host per
            // finagle client, so we should throw an exception immediately on the first
            // failure, letting the DL client quickly detect failures and retry other proxies.
            .retries(1)
            .keepAlive(true);
}
@Override
@SuppressWarnings("unchecked")
public ProxyClient build(SocketAddress address) {
    Service<ThriftClientRequest, byte[]> client = ClientBuilder.safeBuildFactory(
            clientBuilder
                    .hosts((InetSocketAddress) address)
                    .reportTo(clientStats.getFinagleStatsReceiver(address))
    ).toService();
    DistributedLogService.ServiceIface service =
            new DistributedLogService.ServiceToClient(client, new TBinaryProtocol.Factory());
    return new ProxyClient(address, client, service);
}
private ClientBuilder getDefaultClientBuilder() {
    return ClientBuilder.get()
            .hostConnectionLimit(1)
            .tcpConnectTimeout(Duration.fromMilliseconds(200))
            .connectTimeout(Duration.fromMilliseconds(200))
            .requestTimeout(Duration.fromSeconds(1));
}
@Override
public Service<Req, Res> decorate(InetSocketAddress addr) {
    if (addr == null) {
        throw new IllegalArgumentException("address is null");
    }
    ThriftClientFramedCodecFactory codec = new ThriftClientFramedCodecFactory(
            ClientId.current(), false, new TCompactProtocol.Factory());
    Stopwatch sw = new Stopwatch();
    sw.start();
    Service<ThriftClientRequest, byte[]> client = ClientBuilder.safeBuild(
            ClientBuilder.get()
                    .hosts(addr)
                    .codec(codec)
                    .requestTimeout(timeout)
                    .hostConnectionLimit(numThreads));
    sw.stop();
    logger.info(String.format("building finagle client took %s ms", sw.elapsedMillis()));
    return svc.wrap(client);
}
/**
 * Initialise a {@link Client} instance using a {@link ServiceFactory} built from a
 * {@link ClientBuilder}.
 *
 * @param addr address of the kestrel server
 */
public SimpleKestrelClient(InetSocketAddress addr) {
    final ClientBuilder<Command, Response, Yes, Yes, Yes> builder = ClientBuilder.get()
            .codec(Kestrel.get())
            .hosts(addr)
            .hostConnectionLimit(1);
    final ServiceFactory<Command, Response> serviceFactory = ClientBuilder.safeBuildFactory(builder);
    client = Client.newInstance(serviceFactory);
}
        .streamNameRegex(streamRegex)
        .handshakeWithClientInfo(handshakeWithClientInfo)
        .clientBuilder(ClientBuilder.get()
                .connectTimeout(Duration.fromSeconds(1))
                .tcpConnectTimeout(Duration.fromSeconds(1))
                .requestTimeout(Duration.fromSeconds(2))
                .hostConnectionLimit(2)
                .hostConnectionCoresize(2)
                .keepAlive(true)
                .failFast(false))
        .statsReceiver(monitorReceiver.scope("client"))
        .buildMonitorClient();
private DistributedLogClient buildDlogClient() {
    ClientBuilder clientBuilder = ClientBuilder.get()
            .hostConnectionLimit(hostConnectionLimit)
            .hostConnectionCoresize(hostConnectionCoreSize)
            .tcpConnectTimeout(Duration$.MODULE$.fromMilliseconds(200))
            .connectTimeout(Duration$.MODULE$.fromMilliseconds(200))
            .requestTimeout(Duration$.MODULE$.fromSeconds(10))
            .sendBufferSize(sendBufferSize)
            .recvBufferSize(recvBufferSize);
public PinLaterClient(String host, int port, int concurrency) {
    this.service = ClientBuilder.safeBuild(
            ClientBuilder.get()
                    .hosts(new InetSocketAddress(host, port))
                    .codec(ThriftClientFramedCodec.apply(
                            Option.apply(new ClientId("pinlaterclient"))))
                    .hostConnectionLimit(concurrency)
                    .tcpConnectTimeout(Duration.apply(2, TimeUnit.SECONDS))
                    .requestTimeout(Duration.apply(10, TimeUnit.SECONDS))
                    .retries(1));
    this.iface = new PinLater.ServiceToClient(service, new TBinaryProtocol.Factory());
}
ClientBuilder builder = this.clientBuilder;
if (null == builder) {
    builder = ClientBuilder.get()
            .tcpConnectTimeout(Duration.fromMilliseconds(200))
            .connectTimeout(Duration.fromMilliseconds(200))
            .requestTimeout(Duration.fromSeconds(1))
            .retries(20);
    if (!clientConfig.getThriftMux()) {
        builder = builder.hostConnectionLimit(1);
    }
}
// Same selection as configureThriftMux: mux clients use the ThriftMux stack,
// non-mux clients fall back to the framed thrift codec.
if (clientConfig.getThriftMux()) {
    builder = builder.stack(ThriftMux.client().withClientId(clientId));
} else {
    builder = builder.codec(ThriftClientFramedCodec.apply(Option.apply(clientId)));
}
Service<ThriftClientRequest, byte[]> client = ClientBuilder.safeBuildFactory(
        builder.dest(name).reportTo(statsReceiver.scope("routing"))
).toService();
DistributedLogService.ServiceIface service =
public PinLaterClient(ServerSet serverSet, int concurrency) {
    ZookeeperServerSetCluster cluster = new ZookeeperServerSetCluster(serverSet);
    ClientBuilder builder = ClientBuilder.get().cluster(cluster);
    this.service = ClientBuilder.safeBuild(
            builder.codec(ThriftClientFramedCodec.get())
                    .tcpConnectTimeout(Duration.apply(2, TimeUnit.SECONDS))
                    .requestTimeout(Duration.apply(10, TimeUnit.SECONDS))
                    .hostConnectionLimit(concurrency));
    this.iface = new PinLater.ServiceToClient(service, new TBinaryProtocol.Factory());
}
        .clientBuilder(ClientBuilder.get()
                .hostConnectionLimit(10)
                .hostConnectionCoresize(10)
                .tcpConnectTimeout(Duration$.MODULE$.fromSeconds(1))
                .requestTimeout(Duration$.MODULE$.fromSeconds(2)))
        .redirectBackoffStartMs(100)
        .redirectBackoffMaxMs(500)
@SuppressWarnings("unchecked")
private ClientBuilder configureThriftMux(ClientBuilder builder,
                                         ClientId clientId,
                                         ClientConfig clientConfig) {
    if (clientConfig.getThriftMux()) {
        return builder.stack(ThriftMux.client().withClientId(clientId));
    } else {
        return builder.codec(ThriftClientFramedCodec.apply(Option.apply(clientId)));
    }
}
public static void main(String[] args) throws Exception {
    Service<Command, Response> service = ClientBuilder.safeBuild(
            ClientBuilder.get()
                    .hosts("localhost:11211")
                    .hostConnectionLimit(1)
                    .codec(new Memcached()));
    Client client = Client.newInstance(service);
    testClient(client);

    // cache client backed by a static cache pool cluster
    CachePoolCluster cluster = CachePoolClusterUtil.newStaticCluster(
            ImmutableSet.of(new CacheNode("localhost", 11211, 1)));
    ClientBuilder builder = ClientBuilder.get().codec(new Memcached(null));
    com.twitter.finagle.memcachedx.Client memcachedClient = KetamaClientBuilder.get()
            .cachePoolCluster(cluster)
            .clientBuilder(builder)
            .build();
    client = new ClientBase(memcachedClient);
    testClient(client);
}
@Test(timeout = 60000)
public void testBuildClientsFromSameBuilder() throws Exception {
    DistributedLogClientBuilder builder = DistributedLogClientBuilder.newBuilder()
            .name("build-clients-from-same-builder")
            .clientId(ClientId$.MODULE$.apply("test-builder"))
            .finagleNameStr("inet!127.0.0.1:7001")
            .streamNameRegex(".*")
            .handshakeWithClientInfo(true)
            .clientBuilder(ClientBuilder.get()
                    .hostConnectionLimit(1)
                    .connectTimeout(Duration.fromSeconds(1))
                    .tcpConnectTimeout(Duration.fromSeconds(1))
                    .requestTimeout(Duration.fromSeconds(10)));
    DistributedLogClient client1 = builder.build();
    DistributedLogClient client2 = builder.build();
    assertFalse(client1 == client2);
}
System.arraycopy(serverSets, 1, remotes, 0, remotes.length);
ClientBuilder finagleClientBuilder = ClientBuilder.get()
        .connectTimeout(Duration.fromSeconds(1))
        .tcpConnectTimeout(Duration.fromSeconds(1))
        .requestTimeout(Duration.fromSeconds(2))
        .hostConnectionLimit(2)
        .hostConnectionCoresize(2)
        .keepAlive(true)
        .failFast(false);
@Override
protected int runCmd(CommandLine commandLine) throws Exception {
    try {
        parseCommandLine(commandLine);
    } catch (ParseException pe) {
        System.err.println("ERROR: failed to parse commandline : '" + pe.getMessage() + "'");
        printUsage();
        return -1;
    }
    DistributedLogClientBuilder clientBuilder = DistributedLogClientBuilder.newBuilder()
            .name("proxy_tool")
            .clientId(ClientId$.MODULE$.apply("proxy_tool"))
            .maxRedirects(2)
            .host(address)
            .clientBuilder(ClientBuilder.get()
                    .connectionTimeout(Duration.fromSeconds(2))
                    .tcpConnectTimeout(Duration.fromSeconds(2))
                    .requestTimeout(Duration.fromSeconds(10))
                    .hostConnectionLimit(1)
                    .hostConnectionCoresize(1)
                    .keepAlive(true)
                    .failFast(false));
    Pair<DistributedLogClient, MonitorServiceClient> clientPair =
            ClientUtils.buildClient(clientBuilder);
    try {
        return runCmd(clientPair);
    } finally {
        clientPair.getLeft().close();
    }
}
public static void main(String[] args) {
    Service<Command, Response> service = ClientBuilder.safeBuild(
            ClientBuilder.get()
                    .hosts("localhost:11211")
                    .hostConnectionLimit(1)
                    .codec(new Memcached()));
    Client client = Client.newInstance(service);
    testClient(client);

    // cache client backed by a static cache pool cluster
    CachePoolCluster cluster = CachePoolClusterUtil.newStaticCluster(
            ImmutableSet.of(new CacheNode("localhost", 11211, 1)));
    ClientBuilder builder = ClientBuilder.get().codec(new Memcached(null));
    com.twitter.finagle.memcached.Client memcachedClient = KetamaClientBuilder.get()
            .cachePoolCluster(cluster)
            .clientBuilder(builder)
            .build();
    client = new ClientBase(memcachedClient);
    testClient(client);
}