ParallelWrapper$Builder.reportScoreAfterAveraging

How to use the reportScoreAfterAveraging method in org.deeplearning4j.parallelism.ParallelWrapper$Builder

Best Java code snippets using org.deeplearning4j.parallelism.ParallelWrapper$Builder.reportScoreAfterAveraging (Showing top 11 results out of 315)

origin: deeplearning4j/dl4j-examples

.reportScoreAfterAveraging(true)
.averagingFrequency(10)
.workers(Nd4j.getAffinityManager().getNumberOfDevices())
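
On its own this fragment won't compile. Below is a minimal sketch of the surrounding builder chain, assuming a hypothetical already-configured MultiLayerNetwork (net) and DataSetIterator (trainData) whose setup is omitted:

import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.parallelism.ParallelWrapper;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;
import org.nd4j.linalg.factory.Nd4j;

public class ParallelFitSketch {
  // net and trainData are hypothetical: assumed to be configured elsewhere
  static void fitInParallel(MultiLayerNetwork net, DataSetIterator trainData) {
    ParallelWrapper wrapper = new ParallelWrapper.Builder<>(net)
            // one worker per available backend device
            .workers(Nd4j.getAffinityManager().getNumberOfDevices())
            // average model parameters every 10 iterations
            .averagingFrequency(10)
            // log the averaged model's score after each averaging step
            .reportScoreAfterAveraging(true)
            .build();
    wrapper.fit(trainData);
    wrapper.shutdown(); // release worker threads when training completes
  }
}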
origin: org.deeplearning4j/deeplearning4j-parallel-wrapper_2.11 (identical in org.deeplearning4j/deeplearning4j-parallel-wrapper)

.reportScoreAfterAveraging(reportScore)
origin: org.deeplearning4j/deeplearning4j-parallel-wrapper_2.11 (identical in org.deeplearning4j/deeplearning4j-parallel-wrapper)

public EarlyStoppingParallelTrainer(EarlyStoppingConfiguration<T> earlyStoppingConfiguration, T model,
        DataSetIterator train, MultiDataSetIterator trainMulti, EarlyStoppingListener<T> listener,
        int workers, int prefetchBuffer, int averagingFrequency, boolean reportScoreAfterAveraging,
        boolean useLegacyAveraging) {
  this.esConfig = earlyStoppingConfiguration;
  this.train = train;
  this.trainMulti = trainMulti;
  this.iterator = (train != null ? train : trainMulti);
  this.listener = listener;
  this.model = model;
  // Attach a listener that routes iteration events back to this trainer,
  // preserving any listeners already registered on the model
  AveragingIterationListener trainerListener = new AveragingIterationListener(this);
  if (model instanceof MultiLayerNetwork) {
    Collection<IterationListener> listeners = ((MultiLayerNetwork) model).getListeners();
    Collection<IterationListener> newListeners = new LinkedList<>(listeners);
    newListeners.add(trainerListener);
    model.setListeners(newListeners);
  } else if (model instanceof ComputationGraph) {
    Collection<IterationListener> listeners = ((ComputationGraph) model).getListeners();
    Collection<IterationListener> newListeners = new LinkedList<>(listeners);
    newListeners.add(trainerListener);
    model.setListeners(newListeners);
  }
  // Build the wrapper that drives the actual parallel fitting; note that
  // useLegacyAveraging is no longer forwarded (the builder call is commented out)
  this.wrapper = new ParallelWrapper.Builder<>(model).workers(workers).prefetchBuffer(prefetchBuffer)
          .averagingFrequency(averagingFrequency)
          //.useLegacyAveraging(useLegacyAveraging)
          .reportScoreAfterAveraging(reportScoreAfterAveraging).build();
}
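
For context, a hedged sketch of how this constructor might be invoked, assuming the usual IEarlyStoppingTrainer contract; esConf, net and trainIter are hypothetical placeholders for an EarlyStoppingConfiguration, model and training iterator configured elsewhere:

// Hypothetical invocation; esConf, net and trainIter are assumed to exist.
// trainMulti and listener are optional and passed as null here.
EarlyStoppingParallelTrainer<MultiLayerNetwork> trainer =
        new EarlyStoppingParallelTrainer<>(esConf, net, trainIter, null, null,
                4,      // workers
                8,      // prefetchBuffer
                5,      // averagingFrequency
                true,   // reportScoreAfterAveraging
                false); // useLegacyAveraging (ignored; the builder call above is commented out)
EarlyStoppingResult<MultiLayerNetwork> result = trainer.fit();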
origin: CampagneLaboratory/variationanalysis

public ParallelTrainerOnGPU(ComputationGraph graph, int miniBatchSize, int totalExamplesPerIterator) {
  // Worker count, prefetch buffer and averaging frequency can be overridden
  // via system properties; otherwise the defaults below apply
  String numWorkersString = System.getProperty("framework.parallelWrapper.numWorkers");
  int numWorkers = numWorkersString != null ? Integer.parseInt(numWorkersString) : 4;
  String prefetchBufferString = System.getProperty("framework.parallelWrapper.prefetchBuffer");
  int prefetchBuffer = prefetchBufferString != null ? Integer.parseInt(prefetchBufferString) : 12 * numWorkers;
  String averagingFrequencyString = System.getProperty("framework.parallelWrapper.averagingFrequency");
  int averagingFrequency = averagingFrequencyString != null ? Integer.parseInt(averagingFrequencyString) : 3;
  wrapper = new ParallelWrapper.Builder<>(graph)
      .prefetchBuffer(prefetchBuffer)
      .workers(numWorkers)
      .averagingFrequency(averagingFrequency)
      .reportScoreAfterAveraging(false)
      // .useLegacyAveraging(true)
      .build();
  wrapper.setListeners(perListener);
  this.numExamplesPerIterator = totalExamplesPerIterator;
  this.miniBatchSize = miniBatchSize;
}
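
This snippet reads its tuning knobs from system properties. To override the defaults, one could set those properties before the trainer is constructed, or pass them as -D flags on the command line (the class name below is illustrative):

// Programmatic override, before ParallelTrainerOnGPU is constructed:
System.setProperty("framework.parallelWrapper.numWorkers", "2");
System.setProperty("framework.parallelWrapper.prefetchBuffer", "24");
System.setProperty("framework.parallelWrapper.averagingFrequency", "5");

// Equivalent command-line form:
// java -Dframework.parallelWrapper.numWorkers=2 MyTrainingJob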
org.deeplearning4j.parallelism.ParallelWrapper$Builder.reportScoreAfterAveraging

Javadoc

This method enables/disables reporting of the averaged model's score after each averaging step

Popular methods of ParallelWrapper$Builder

  • <init>
    Build ParallelWrapper for MultiLayerNetwork
  • averagingFrequency
    Model averaging frequency.
  • build
    Returns the configured ParallelWrapper instance
  • prefetchBuffer
    Size of the prefetch buffer used for background data prefetching. Usually it's better to keep this value equal to the number of workers
  • workers
    Configures the number of workers used for parallel training
  • averageUpdaters
    Enables/disables updater averaging. Default value: TRUE. PLEASE NOTE: this method is mostly suitable for debugging
  • gradientsAccumulator
    Specifies the GradientsAccumulator instance to be used in this ParallelWrapper instance
  • trainerFactory
    Specifies a TrainerContext for the given ParallelWrapper instance. Defaults to DefaultTrainerContext otherwise
  • trainingMode
  • workspaceMode
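
Several of the methods above compose into a single builder chain. A sketch, assuming a hypothetical already-configured ComputationGraph (graph) and a DL4J version whose WorkspaceMode enum includes ENABLED:

import org.deeplearning4j.nn.conf.WorkspaceMode;
import org.deeplearning4j.parallelism.ParallelWrapper;

// graph is hypothetical: assumed to be built and initialized elsewhere
ParallelWrapper wrapper = new ParallelWrapper.Builder<>(graph)
        .workers(4)                           // parallel training threads
        .prefetchBuffer(8)                    // background prefetch queue size
        .averagingFrequency(5)                // average parameters every 5 iterations
        .averageUpdaters(true)                // also average updater state (the default)
        .workspaceMode(WorkspaceMode.ENABLED) // workspace mode for the workers
        .reportScoreAfterAveraging(true)      // log score after each averaging step
        .build();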
