
IllegalArgumentException: requirement failed: maxBins should be greater than max categories

Expert Contributor

CDH 5.2.0, CentOS 6.4

The skeleton of decision_tree.scala looks like this:

import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.sql.Row

...
val raw_data = sqlContext.parquetFile("/path/to/raw/data/")

raw_data.registerTempTable("raw_data")

val raw_rdd = sqlContext.sql("select ... from raw_data where rec_type=3")

// the label column comes back as an Integer; LabeledPoint expects a Double
val filtered_rdd = raw_rdd.map { case Row(label: Integer, ...) =>
  LabeledPoint(label.toDouble, Vectors.dense(...)) }

val splits = filtered_rdd.randomSplit(Array(0.7, 0.3))
val (trainingData, testData) = (splits(0), splits(1))

val numClasses = 2
val categoricalFeaturesInfo = Map[Int, Int](0 -> 20, 1 -> 30)
val impurity = "gini"
val maxDepth = 12
val maxBins = 32

val model = DecisionTree.trainClassifier(trainingData, numClasses,
  categoricalFeaturesInfo, impurity, maxDepth, maxBins)
...

When I invoke spark-shell with the command

$ spark-shell --executor-memory 2g --driver-memory 2g -deprecation -i decision_tree.scala

 

The job fails with the following error, even though maxBins was set to 32:

java.lang.IllegalArgumentException: requirement failed: maxBins (= 4) should be greater than max categories in categorical features (>= 20)
	at scala.Predef$.require(Predef.scala:233)
	at org.apache.spark.mllib.tree.impl.DecisionTreeMetadata$$anonfun$buildMetadata$2.apply(DecisionTreeMetadata.scala:91)
	at org.apache.spark.mllib.tree.impl.DecisionTreeMetadata$$anonfun$buildMetadata$2.apply(DecisionTreeMetadata.scala:90)
	at scala.collection.immutable.Map$Map4.foreach(Map.scala:181)
	at org.apache.spark.mllib.tree.impl.DecisionTreeMetadata$.buildMetadata(DecisionTreeMetadata.scala:90)
	at org.apache.spark.mllib.tree.DecisionTree.train(DecisionTree.scala:66)
	at org.apache.spark.mllib.tree.DecisionTree$.train(DecisionTree.scala:339)
	at org.apache.spark.mllib.tree.DecisionTree$.trainClassifier(DecisionTree.scala:368)
	at $iwC$$iwC$$iwC$$iwC$$anonfun$1.apply$mcVI$sp(<console>:124)
	at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
	at $iwC$$iwC$$iwC$$iwC.<init>(<console>:22)
	at $iwC$$iwC$$iwC.<init>(<console>:160)
	at $iwC$$iwC.<init>(<console>:162)
	at $iwC.<init>(<console>:164)
	at <init>(<console>:166)
	at .<init>(<console>:170)
	at .<clinit>(<console>)
	at .<init>(<console>:7)
	at .<clinit>(<console>)
	at $print(<console>)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:846)
	at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1119)
	at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:672)
	at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:703)
	at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:667)
	at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:819)
	at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:864)
... (long chain of reallyInterpret$1 and interpretStartingWith)
	at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:776)
	at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:619)
	at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:627)
	at org.apache.spark.repl.SparkILoop.loop(SparkILoop.scala:632)
	at org.apache.spark.repl.SparkILoop$$anonfun$interpretAllFrom$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(SparkILoop.scala:642)
	at org.apache.spark.repl.SparkILoop$$anonfun$interpretAllFrom$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(SparkILoop.scala:639)
	at scala.reflect.io.Streamable$Chars$class.applyReader(Streamable.scala:104)
	at scala.reflect.io.File.applyReader(File.scala:82)
	at org.apache.spark.repl.SparkILoop$$anonfun$interpretAllFrom$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SparkILoop.scala:639)
	at org.apache.spark.repl.SparkILoop$$anonfun$interpretAllFrom$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:639)
	at org.apache.spark.repl.SparkILoop$$anonfun$interpretAllFrom$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:639)
	at org.apache.spark.repl.SparkILoop.savingReplayStack(SparkILoop.scala:153)
	at org.apache.spark.repl.SparkILoop$$anonfun$interpretAllFrom$1.apply$mcV$sp(SparkILoop.scala:638)
	at org.apache.spark.repl.SparkILoop$$anonfun$interpretAllFrom$1.apply(SparkILoop.scala:638)
	at org.apache.spark.repl.SparkILoop$$anonfun$interpretAllFrom$1.apply(SparkILoop.scala:638)
	at org.apache.spark.repl.SparkILoop.savingReader(SparkILoop.scala:158)
	at org.apache.spark.repl.SparkILoop.interpretAllFrom(SparkILoop.scala:637)
	at org.apache.spark.repl.SparkILoop$$anonfun$loadCommand$1.apply(SparkILoop.scala:702)
	at org.apache.spark.repl.SparkILoop$$anonfun$loadCommand$1.apply(SparkILoop.scala:701)
	at org.apache.spark.repl.SparkILoop.withFile(SparkILoop.scala:695)
	at org.apache.spark.repl.SparkILoop.loadCommand(SparkILoop.scala:701)
	at org.apache.spark.repl.SparkILoop$$anonfun$standardCommands$7.apply(SparkILoop.scala:311)
	at org.apache.spark.repl.SparkILoop$$anonfun$standardCommands$7.apply(SparkILoop.scala:311)
	at scala.tools.nsc.interpreter.LoopCommands$LineCmd.apply(LoopCommands.scala:81)
	at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:771)
	at org.apache.spark.repl.SparkILoop$$anonfun$loadFiles$1.apply(SparkILoop.scala:872)
	at org.apache.spark.repl.SparkILoop$$anonfun$loadFiles$1.apply(SparkILoop.scala:870)
	at scala.collection.immutable.List.foreach(List.scala:318)
	at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:870)
	at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:957)
	at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:907)
	at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:907)
	at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
	at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:907)
	at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1002)
	at org.apache.spark.repl.Main$.main(Main.scala:31)
	at org.apache.spark.repl.Main.main(Main.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:331)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

If the filter condition (rec_type=3) is removed from the raw_rdd query, the job runs to completion.
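
For reference, the size of the filtered set is easy to check from the shell (using the names above):

// how many rows survive the rec_type=3 filter?
filtered_rdd.count()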

 

Any idea?

1 ACCEPTED SOLUTION

Master Collaborator

The problem is that you have very few input data points after filtering -- 4, judging by the error. maxBins greater than the number of examples doesn't make sense, so it's capped at the size of the input. But maxBins also can't be less than the number of values of any categorical feature, since then the tree can't try all possible values.
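
A quick way to confirm this is to reproduce the check Spark runs internally. This is just a sketch reusing the names from your script (it assumes trainingData, maxBins, and categoricalFeaturesInfo are still in scope):

// how many examples actually survived the rec_type=3 filter?
val numExamples = trainingData.count()

// Spark caps maxBins at the number of examples...
val maxPossibleBins = math.min(maxBins, numExamples.toInt)

// ...and then requires that cap to cover the largest declared categorical arity
val maxCategories = categoricalFeaturesInfo.values.max

// training fails whenever maxPossibleBins < maxCategories
println(s"examples=$numExamples, effective maxBins=$maxPossibleBins, max categories=$maxCategories")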

 

It's not obvious from the error message (which is clearer in versions later than the Spark 1.1 you're using), but that's almost certainly the issue.
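
If the filtered data really is that small and you still want to train something, one possible workaround (my sketch, not part of the check above) is to stop declaring the two features as categorical; MLlib treats any feature absent from categoricalFeaturesInfo as continuous, so the arity requirement no longer applies:

// hypothetical workaround: an empty map treats every feature as continuous
val model = DecisionTree.trainClassifier(trainingData, numClasses,
  Map[Int, Int](), impurity, maxDepth, maxBins)

The more robust fix is simply to make sure enough rows survive the rec_type=3 filter that maxBins isn't capped below 30.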


2 REPLIES


Expert Contributor
That's it. Thanks.