ParallelWrapper$Builder

How to use ParallelWrapper$Builder in org.deeplearning4j.parallelism

Best Java code snippets using org.deeplearning4j.parallelism.ParallelWrapper$Builder (Showing top 11 results out of 315)

origin: deeplearning4j/dl4j-examples

ParallelWrapper wrapper = new ParallelWrapper.Builder(vgg16)
  .prefetchBuffer(24)
  .workers(Nd4j.getAffinityManager().getNumberOfDevices())
  .trainingMode(ParallelWrapper.TrainingMode.SHARED_GRADIENTS)
  .build();
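
Once built, the wrapper is normally driven with its fit(...) method, which replicates the model across the configured workers for each pass. A minimal sketch of that step (the iterator name and epoch count are illustrative additions, not part of the indexed snippet):

DataSetIterator trainIter = /* hypothetical training data source */;
for (int epoch = 0; epoch < numEpochs; epoch++) {
  wrapper.fit(trainIter);  // one parallel training pass over the data
}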
origin: deeplearning4j/dl4j-examples

ParallelWrapper wrapper = new ParallelWrapper.Builder(model)
  .prefetchBuffer(24)
  .workers(4)
  .averagingFrequency(3)
  .reportScoreAfterAveraging(true)
  .build();
origin: org.deeplearning4j/deeplearning4j-parallel-wrapper

ParallelWrapper wrapper = new ParallelWrapper.Builder(model)
        .prefetchBuffer(prefetchSize)
        .workers(workers)
        .averagingFrequency(averagingFrequency).averageUpdaters(averageUpdaters)
        .reportScoreAfterAveraging(reportScore)
        .build();
origin: org.deeplearning4j/deeplearning4j-parallel-wrapper_2.11

public EarlyStoppingParallelTrainer(EarlyStoppingConfiguration<T> earlyStoppingConfiguration, T model,
        DataSetIterator train, MultiDataSetIterator trainMulti, EarlyStoppingListener<T> listener,
        int workers, int prefetchBuffer, int averagingFrequency, boolean reportScoreAfterAveraging,
        boolean useLegacyAveraging) {
  this.esConfig = earlyStoppingConfiguration;
  this.train = train;
  this.trainMulti = trainMulti;
  this.iterator = (train != null ? train : trainMulti);
  this.listener = listener;
  this.model = model;
  // adjust UI listeners
  AveragingIterationListener trainerListener = new AveragingIterationListener(this);
  if (model instanceof MultiLayerNetwork) {
    Collection<IterationListener> listeners = ((MultiLayerNetwork) model).getListeners();
    Collection<IterationListener> newListeners = new LinkedList<>(listeners);
    newListeners.add(trainerListener);
    model.setListeners(newListeners);
  } else if (model instanceof ComputationGraph) {
    Collection<IterationListener> listeners = ((ComputationGraph) model).getListeners();
    Collection<IterationListener> newListeners = new LinkedList<>(listeners);
    newListeners.add(trainerListener);
    model.setListeners(newListeners);
  }
  this.wrapper = new ParallelWrapper.Builder<>(model).workers(workers).prefetchBuffer(prefetchBuffer)
          .averagingFrequency(averagingFrequency)
          //.useLegacyAveraging(useLegacyAveraging)
          .reportScoreAfterAveraging(reportScoreAfterAveraging).build();
}
origin: CampagneLaboratory/variationanalysis

public ParallelTrainerOnGPU(ComputationGraph graph, int miniBatchSize, int totalExamplesPerIterator) {
  String numWorkersString = System.getProperty("framework.parallelWrapper.numWorkers");
  int numWorkers = numWorkersString != null ? Integer.parseInt(numWorkersString) : 4;
  String prefetchBufferString = System.getProperty("framework.parallelWrapper.prefetchBuffer");
  int prefetchBuffer = prefetchBufferString != null ? Integer.parseInt(prefetchBufferString) : 12 * numWorkers;
  String averagingFrequencyString = System.getProperty("framework.parallelWrapper.averagingFrequency");
  int averagingFrequency = averagingFrequencyString != null ? Integer.parseInt(averagingFrequencyString) : 3;
  wrapper = new ParallelWrapper.Builder<>(graph)
      .prefetchBuffer(prefetchBuffer)
      .workers(numWorkers)
      .averagingFrequency(averagingFrequency)
      .reportScoreAfterAveraging(false)
      // .useLegacyAveraging(true)
      .build();
  wrapper.setListeners(perListener);
  this.numExamplesPerIterator = totalExamplesPerIterator;
  this.miniBatchSize = miniBatchSize;
}
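
The constructor above takes its tuning knobs from JVM system properties. A hedged example of overriding them before the trainer is constructed (the property names are copied verbatim from the getProperty calls above; the values are illustrative):

System.setProperty("framework.parallelWrapper.numWorkers", "2");
System.setProperty("framework.parallelWrapper.prefetchBuffer", "24");
System.setProperty("framework.parallelWrapper.averagingFrequency", "5");

The same effect can be had with -D flags on the java command line.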
origin: deeplearning4j/dl4j-examples

ParallelWrapper wrapper = new ParallelWrapper.Builder(model)
  .prefetchBuffer(24)
  .workers(2)
  .workspaceMode(WorkspaceMode.SINGLE)
  .trainerFactory(new SymmetricTrainerContext())      // custom trainer context instead of the default
  .trainingMode(ParallelWrapper.TrainingMode.CUSTOM)  // bypass parameter averaging
  .gradientsAccumulator(new EncodedGradientsAccumulator(2, 1e-3))  // share encoded gradient updates instead
  .build();
origin: deeplearning4j/dl4j-examples

ParallelWrapper wrapper = new ParallelWrapper.Builder(net)
  .prefetchBuffer(24)
  .workers(8)
  .averagingFrequency(3)
  .reportScoreAfterAveraging(true)
  .build();
origin: deeplearning4j/dl4j-examples

ParallelWrapper wrapper = new ParallelWrapper.Builder(model)
  .prefetchBuffer(24)
  .workers(2)
  .averagingFrequency(3)
  .reportScoreAfterAveraging(true)
  .build();
origin: deeplearning4j/dl4j-examples

ParallelWrapper wrapper = new ParallelWrapper.Builder(net)
  .prefetchBuffer(24)
  .workers(4)
  .averagingFrequency(3)
  .reportScoreAfterAveraging(true)
  .build();
origin: deeplearning4j/dl4j-examples

log.info(transferLearningHelper.unfrozenGraph().summary());
ParallelWrapper wrapper = new ParallelWrapper.Builder(transferLearningHelper.unfrozenGraph())
  .prefetchBuffer(24)
  .workers(Nd4j.getAffinityManager().getNumberOfDevices())
  .averagingFrequency(3)
  .reportScoreAfterAveraging(true)
  .build();
origin: deeplearning4j/dl4j-examples

DataSetIterator test = new ExistingMiniBatchDataSetIterator(new File(TEST_PATH));
ParallelWrapper pw = new ParallelWrapper.Builder<>(net)
  .prefetchBuffer(16 * Nd4j.getAffinityManager().getNumberOfDevices())
  .reportScoreAfterAveraging(true)
  .averagingFrequency(10)
  .workers(Nd4j.getAffinityManager().getNumberOfDevices())
  .build();
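
A plausible continuation of this last snippet, not part of the indexed code: fit the wrapper on a training iterator, then evaluate on the test iterator built above. TRAIN_PATH is hypothetical, and evaluation is assumed to run on the underlying network (here named net) rather than the wrapper:

DataSetIterator train = new ExistingMiniBatchDataSetIterator(new File(TRAIN_PATH));
pw.fit(train);                         // parallel training pass
Evaluation eval = net.evaluate(test);  // single-threaded evaluation on the wrapped network
log.info(eval.stats());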

Most used methods

  • <init>
    Build ParallelWrapper for MultiLayerNetwork
  • averagingFrequency
    Model averaging frequency.
  • build
This method returns a ParallelWrapper instance
  • prefetchBuffer
Size of prefetch buffer that will be used for background data prefetching. Usually it's better to keep this value equal to the number of workers.
  • reportScoreAfterAveraging
    This method enables/disables averaged model score reporting
  • workers
This method allows you to configure the number of workers that will be used for parallel training
  • averageUpdaters
This method enables/disables updaters averaging. Default value: TRUE. PLEASE NOTE: this method is suitable mostly for debugging; the default is best left unchanged.
  • gradientsAccumulator
This method allows you to specify a GradientsAccumulator instance to be used in this ParallelWrapper instance
  • trainerFactory
Specify a TrainerContext for the given ParallelWrapper instance. Defaults to DefaultTrainerContext otherwise.
  • trainingMode
  • workspaceMode
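
Taken together, a minimal sketch of the builder API listed above (model construction and the training iterator are omitted/assumed; shutdown() is included on the assumption that worker threads should be released once training ends):

ParallelWrapper wrapper = new ParallelWrapper.Builder<>(model)
        .workers(4)                       // number of parallel training workers
        .prefetchBuffer(8)                // background prefetch queue size
        .averagingFrequency(3)            // average parameters every 3 minibatches
        .averageUpdaters(true)            // also average updater state (the default)
        .reportScoreAfterAveraging(true)  // log the score of the averaged model
        .build();

wrapper.fit(trainIterator);               // trainIterator: an assumed DataSetIterator
wrapper.shutdown();                       // stop workers and release resources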
