private void executeAndVerifyProposals(ZkUtils zkUtils,
                                       Collection<ExecutionProposal> proposalsToExecute,
                                       Collection<ExecutionProposal> proposalsToCheck) {
  KafkaCruiseControlConfig configs = new KafkaCruiseControlConfig(getExecutorProperties());
  Executor executor = new Executor(configs, new SystemTime(), new MetricRegistry(), 86400000L, 43200000L);
  executor.setExecutionMode(false);
  executor.executeProposals(proposalsToExecute, Collections.emptySet(), null,
                            EasyMock.mock(LoadMonitor.class), null, null, null);
  // Snapshot the replication factor of each partition before the execution finishes.
  Map<TopicPartition, Integer> replicationFactors = new HashMap<>();
  for (ExecutionProposal proposal : proposalsToCheck) {
    int replicationFactor = zkUtils.getReplicasForPartition(proposal.topic(), proposal.partitionId()).size();
    replicationFactors.put(new TopicPartition(proposal.topic(), proposal.partitionId()), replicationFactor);
  }
  waitUntilExecutionFinishes(executor);
  for (ExecutionProposal proposal : proposalsToCheck) {
    TopicPartition tp = new TopicPartition(proposal.topic(), proposal.partitionId());
    int expectedReplicationFactor = replicationFactors.get(tp);
    assertEquals("Replication factor for partition " + tp + " should be " + expectedReplicationFactor,
                 expectedReplicationFactor,
                 zkUtils.getReplicasForPartition(tp.topic(), tp.partition()).size());
    if (proposal.hasReplicaAction()) {
      for (int brokerId : proposal.newReplicas()) {
        assertTrue("The partition should have moved for " + tp,
                   zkUtils.getReplicasForPartition(tp.topic(), tp.partition()).contains(brokerId));
      }
    }
    assertEquals("The leader should have moved for " + tp,
                 proposal.newLeader(),
                 zkUtils.getLeaderForPartition(tp.topic(), tp.partition()).get());
  }
}
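The snapshot-and-verify pattern the test uses (record each partition's replica count before the executor runs, then assert it is unchanged afterwards) can be sketched without the Kafka and ZooKeeper machinery. The class and method names below are hypothetical; a plain map from partition name to replica list stands in for `zkUtils`:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: snapshot per-partition replica counts before an action
// runs, then check that every partition's count is unchanged afterwards.
public class ReplicationFactorCheck {
    public static Map<String, Integer> snapshot(Map<String, List<Integer>> replicasByPartition) {
        Map<String, Integer> factors = new HashMap<>();
        replicasByPartition.forEach((tp, replicas) -> factors.put(tp, replicas.size()));
        return factors;
    }

    public static boolean unchanged(Map<String, Integer> before, Map<String, List<Integer>> after) {
        // A partition that was not snapshotted counts as a change.
        return after.entrySet().stream()
                .allMatch(e -> before.getOrDefault(e.getKey(), -1) == e.getValue().size());
    }
}
```

As in the test above, replicas may move to different brokers while the replication factor stays constant, so only the sizes are compared, not the broker ids.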
@Test
public void testFrequentItems() {
  Dataset<Row> df = spark.table("testData2");
  String[] cols = {"a"};
  Dataset<Row> results = df.stat().freqItems(cols, 0.2);
  Assert.assertTrue(results.collectAsList().get(0).getSeq(0).contains(1));
}
/**
 * Filter out properties from the original config that are not supported by Kafka.
 * For example, we allow users to set replication.factor as a property of the streams
 * and then parse it out so we can pass it separately as Kafka requires. But Kafka
 * will also throw if replication.factor is passed as a property on a new topic.
 *
 * @param originalConfig The original config to filter
 * @return The filtered config
 */
private static Map<String, String> filterUnsupportedProperties(Map<String, String> originalConfig) {
  Map<String, String> filteredConfig = new HashMap<>();
  for (Map.Entry<String, String> entry : originalConfig.entrySet()) {
    // Kafka requires replication factor, but not as a property, so we have to filter it out.
    if (!KafkaConfig.TOPIC_REPLICATION_FACTOR().equals(entry.getKey())) {
      if (LogConfig.configNames().contains(entry.getKey())) {
        filteredConfig.put(entry.getKey(), entry.getValue());
      } else {
        LOG.warn("Property '{}' is not a valid Kafka topic config. It will be ignored.", entry.getKey());
      }
    }
  }
  return filteredConfig;
}
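The same allow-list filtering pattern can be sketched in isolation. The class below is a hypothetical stand-in: a hard-coded key and set replace `KafkaConfig.TOPIC_REPLICATION_FACTOR()` and `LogConfig.configNames()`, and the warning log is reduced to a comment:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical stand-in for the Kafka-backed filter above: drop the
// replication-factor key outright, keep only keys on an allow-list.
public class TopicConfigFilter {
    static final String REPLICATION_FACTOR = "replication.factor";
    static final Set<String> VALID_TOPIC_CONFIGS = Set.of("retention.ms", "cleanup.policy");

    public static Map<String, String> filterUnsupportedProperties(Map<String, String> originalConfig) {
        Map<String, String> filtered = new HashMap<>();
        for (Map.Entry<String, String> entry : originalConfig.entrySet()) {
            if (REPLICATION_FACTOR.equals(entry.getKey())) {
                continue; // passed to Kafka separately, never as a topic property
            }
            if (VALID_TOPIC_CONFIGS.contains(entry.getKey())) {
                filtered.put(entry.getKey(), entry.getValue());
            }
            // otherwise: unknown key, would be logged and ignored
        }
        return filtered;
    }
}
```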
private void convert(Sort sort, Production prod) {
  convert(sort, prod.klabel().isDefined() && prod.klabel().get().params().contains(sort));
}
private static void checkCircularModuleImports(Module mainModule, scala.collection.Seq<Module> visitedModules) {
  if (visitedModules.contains(mainModule)) {
    StringBuilder msg = new StringBuilder("Found circularity in module imports: ");
    for (Module m : mutable(visitedModules)) {
      msg.append(m.getName()).append(" < ");
    }
    msg.append(visitedModules.head().getName());
    throw KEMException.compilerError(msg.toString());
  }
}
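The cycle check can be sketched without the Scala collections or the K framework's `KEMException`. The class below is hypothetical: plain strings stand in for `Module`, a `java.util.List` for the Scala `Seq`, and `IllegalStateException` for the compiler error:

```java
import java.util.List;

// Hypothetical stand-in for the check above: `visitedModules` is the import
// path walked so far; seeing `mainModule` again means the imports form a cycle,
// and the error message renders the cycle as "A < B < A".
public class ImportCycleCheck {
    public static void checkCircularModuleImports(String mainModule, List<String> visitedModules) {
        if (visitedModules.contains(mainModule)) {
            StringBuilder msg = new StringBuilder("Found circularity in module imports: ");
            for (String m : visitedModules) {
                msg.append(m).append(" < ");
            }
            msg.append(visitedModules.get(0));
            throw new IllegalStateException(msg.toString());
        }
    }
}
```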