"Asked to rebuild %s %s.%s but I don't know keyspace %s", targetType, targetKeyspace, targetName, targetKeyspace)); metadata.cluster.submitSchemaRefresh(null, null, null, null); } else { switch (targetType) {
case CREATED:
case UPDATED:
  submitSchemaRefresh(scc.targetType, scc.targetKeyspace, scc.targetName, scc.targetSignature);
  break;
if (refreshSchema) {
  schemaReady = submitSchemaRefresh(targetType, targetKeyspace, targetName, targetSignature);
case CREATED:
  if (scc.table.isEmpty()) submitSchemaRefresh(null, null);
  else submitSchemaRefresh(scc.keyspace, null);
  break;
case DROPPED:
  if (scc.table.isEmpty()) submitSchemaRefresh(null, null);
  else submitSchemaRefresh(scc.keyspace, null);
  break;
case UPDATED:
  if (scc.table.isEmpty()) submitSchemaRefresh(scc.keyspace, null);
  else submitSchemaRefresh(scc.keyspace, scc.table);
  break;
logger.info(
    String.format(
        "Asked to rebuild %s %s.%s but I don't know keyspace %s",
        targetType, targetKeyspace, targetName, targetKeyspace));
metadata.cluster.submitSchemaRefresh(null, null, null, null);
} else {
  switch (targetType) {
cluster.submitSchemaRefresh(null, null);
return;
schemaReady = submitSchemaRefresh(targetType, targetKeyspace, targetName, targetSignature);
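The `submitSchemaRefresh` calls above hand the refresh off to a background task and return a future (`schemaReady`) the caller can block on. A minimal standalone sketch of that pattern, assuming a single-threaded executor and a placeholder `refreshSchema` body (the `SchemaRefresher` class and `lastRefreshed` field are hypothetical, not driver code):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class SchemaRefresher {
  // Single-threaded executor so refreshes are serialized in submission order.
  private final ExecutorService executor = Executors.newSingleThreadExecutor();
  private volatile String lastRefreshed; // records the last refresh target, for illustration

  // Hand the refresh off to the executor and return a future the caller can
  // block on (analogous to waiting on the result of submitSchemaRefresh).
  public CompletableFuture<Void> submitSchemaRefresh(String keyspace, String table) {
    return CompletableFuture.runAsync(() -> refreshSchema(keyspace, table), executor);
  }

  private void refreshSchema(String keyspace, String table) {
    // Placeholder: real code would re-query the schema tables here and
    // rebuild the cluster metadata for the given keyspace/table.
    lastRefreshed =
        (keyspace == null ? "<all>" : keyspace) + "." + (table == null ? "<all>" : table);
  }

  public String lastRefreshed() {
    return lastRefreshed;
  }

  public void shutdown() {
    executor.shutdown();
  }

  public static void main(String[] args) {
    SchemaRefresher r = new SchemaRefresher();
    r.submitSchemaRefresh("ks1", null).join(); // block until the refresh completes
    System.out.println(r.lastRefreshed()); // prints "ks1.<all>"
    r.shutdown();
  }
}
```

Returning a future rather than blocking in the caller is what lets the `DROPPED`/`CREATED`/`UPDATED` event handlers above fire refreshes without stalling the event thread.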
private static void waitFor(String node, Cluster cluster, int timeoutSeconds, boolean waitForDown) {
  if (waitForDown) logger.debug("Waiting for node to leave: {}", node);
  else logger.debug("Waiting for upcoming node: {}", node);

  // In the case where we've killed the last node in the cluster, if we haven't
  // tried doing an actual query, the driver won't realize that last node is dead
  // until keepalive kicks in, and that's a fairly long time. So we cheat and force
  // the detection by triggering a request.
  if (waitForDown)
    Futures.getUnchecked(cluster.manager.submitSchemaRefresh(null, null, null, null));

  if (waitForDown) {
    check()
        .every(1, SECONDS)
        .before(timeoutSeconds, SECONDS)
        .that(new HostIsDown(cluster, node))
        .becomesTrue();
  } else {
    check()
        .every(1, SECONDS)
        .before(timeoutSeconds, SECONDS)
        .that(new HostIsUp(cluster, node))
        .becomesTrue();
  }
}
@Override
public void run() {
  try {
    // Before refreshing the schema, wait for schema agreement so that
    // querying a table just after having created it doesn't fail.
    if (!ControlConnection.waitForSchemaAgreement(connection, Cluster.Manager.this))
      logger.warn(
          "No schema agreement from live replicas after {} ms. The schema may not be up to date on some nodes.",
          ControlConnection.MAX_SCHEMA_AGREEMENT_WAIT_MS);
    ControlConnection.refreshSchema(connection, keyspace, table, Cluster.Manager.this);
  } catch (Exception e) {
    logger.error(
        "Error during schema refresh ({}). The schema from Cluster.getMetadata() might appear stale. Asynchronously submitting job to fix.",
        e.getMessage());
    submitSchemaRefresh(keyspace, table);
  } finally {
    // Always set the result.
    future.setResult(rs);
  }
}
});
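The `check().every(...).before(...).that(...).becomesTrue()` idiom in `waitFor` polls a condition at a fixed interval until it holds or a timeout expires. A minimal self-contained sketch of that polling loop (the `PollUntil` helper is hypothetical, not the test utility the driver actually uses):

```java
import java.util.function.BooleanSupplier;

public final class PollUntil {
  // Poll `condition` every `intervalMillis` until it becomes true or
  // `timeoutMillis` elapses. Returns whether the condition became true in time.
  public static boolean becomesTrue(
      BooleanSupplier condition, long intervalMillis, long timeoutMillis) {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (!condition.getAsBoolean()) {
      if (System.currentTimeMillis() >= deadline) return false; // timed out
      try {
        Thread.sleep(intervalMillis);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the interrupt flag
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    long start = System.currentTimeMillis();
    // Condition flips true after ~50 ms, well within the 1 s budget.
    boolean ok = becomesTrue(() -> System.currentTimeMillis() - start > 50, 10, 1000);
    System.out.println(ok); // prints "true"
  }
}
```

Polling like this (rather than sleeping once for the full timeout) is what lets the test return as soon as the host-up or host-down state is observed.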