Member since: 10-09-2015
Posts: 76
Kudos Received: 33
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4924 | 03-09-2017 09:08 PM |
| | 5261 | 02-23-2017 08:01 AM |
| | 1696 | 02-21-2017 03:04 AM |
| | 2049 | 02-16-2017 08:00 AM |
| | 1080 | 01-26-2017 06:32 PM |
02-24-2022 01:04 PM
@HaiderNaveed As this is an older post, you would have a better chance of receiving a resolution by starting a new thread. A new thread is also an opportunity to provide details specific to your environment, which will help others give you a more accurate answer. You can link to this thread as a reference in your new post. Thanks!
01-12-2017 04:43 AM
Thanks @Terry Stebbens and @Ian Roberts. I have changed to yarn.scheduler.capacity.maximum-am-resource-percent=0.4 and it's working fine.
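For anyone landing here later, this property lives in capacity-scheduler.xml (editable via the Ambari YARN config section). A minimal sketch of the setting:

```xml
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <!-- fraction of cluster resources that ApplicationMasters may consume;
       raising it lets more concurrent applications start -->
  <value>0.4</value>
</property>
```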
02-08-2017 12:20 PM
1 Kudo
I now have a working Livy; at least sc.version works. After trying everything I could find with Livy 0.2.0 (the version in HDP 2.5.0), I decided to give 0.3.0 a try. I believe the problem is caused by a bug in Spark 1.6.2 when connecting to the metadata store. After compiling Livy 0.3.0 against Hadoop 2.7.3 and Spark 2.0.0 and installing it beside 0.2.0, I had problems creating credentials for the HTTP principal. I solved that by using the Hadoop jars from Livy 0.2.0 instead of those from the build.
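For anyone repeating this, a rough sketch of the sort of Maven build I mean; the property names here are an assumption based on Livy's standard pom properties and may need adjusting for your checkout:

```bash
# assumption: livy's pom exposes spark.version / hadoop.version overrides
mvn clean package -DskipTests \
  -Dspark.version=2.0.0 \
  -Dhadoop.version=2.7.3
```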
01-03-2017 09:41 PM
Alicia, please see my answer above from Oct 24. If you are running Spark on YARN, you have to go through the YARN ResourceManager UI to reach the Spark UI for a running job; the link to the YARN UI is available from the Ambari YARN service. For a completed job, you need to go through the Spark History Server; its link is available from the Ambari Spark service.
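As a hedged alternative to clicking through the ResourceManager UI, the tracking URL for a running application can also be read from the YARN CLI:

```bash
# the Tracking-URL column points at the Spark UI for each running app
yarn application -list -appStates RUNNING
```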
12-22-2016 10:16 PM
Without the full exception stack trace it's difficult to know what happened. If you are instantiating Hive, you may need to add hive-site.xml and the DataNucleus jars to the job, e.g.:

--jars /usr/hdp/current/spark-client/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/current/spark-client/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/current/spark-client/lib/datanucleus-core-3.2.10.jar
--files /usr/hdp/current/spark-client/conf/hive-site.xml
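For context, a minimal sketch of how those flags sit in a full spark-submit invocation; the class name and application jar here are hypothetical placeholders:

```bash
spark-submit \
  --master yarn \
  --class com.example.MyHiveApp \
  --jars /usr/hdp/current/spark-client/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/current/spark-client/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/current/spark-client/lib/datanucleus-core-3.2.10.jar \
  --files /usr/hdp/current/spark-client/conf/hive-site.xml \
  my-hive-app.jar   # hypothetical application jar
```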
12-20-2016 08:51 PM
'%1s' is a Java format specifier that I use to dynamically insert the HBase table name into the catalog for testing purposes. The other issue you mentioned (multiple columns) is not applicable in my case, since I don't create a new table using the option 'HBaseTableCatalog.newTable -> "5"'; I use an existing pre-split table. Below is the decompiled code from 'Utils$.toBytes()'. If one of the fields in the DataFrame is null, toBytes() receives null as its first argument 'input', and null is not an instance of anything, so execution eventually falls through to 'label323' and throws that error. So the only workaround at this stage is to remove the null fields from the DataFrame or to populate them with something, which is not always feasible.

```java
// Decompiler output (not compilable as-is): a null 'input' matches none of
// the instanceof branches and falls through to label323, which throws.
public byte[] toBytes(Object input, Field field) {
    if (field.schema().isDefined());
    Object record;
    Object localObject1 = input;
    Object localObject2;
    if (localObject1 instanceof Boolean) {
        boolean bool = BoxesRunTime.unboxToBoolean(localObject1);
        localObject2 = Bytes.toBytes(bool);
    } else if (localObject1 instanceof Byte) {
        int i = BoxesRunTime.unboxToByte(localObject1);
        localObject2 = new byte[] { i };
    } else if (localObject1 instanceof byte[]) {
        byte[] arrayOfByte = (byte[]) localObject1;
        localObject2 = arrayOfByte;
    } else if (localObject1 instanceof Double) {
        double d = BoxesRunTime.unboxToDouble(localObject1);
        localObject2 = Bytes.toBytes(d);
    } else if (localObject1 instanceof Float) {
        float f = BoxesRunTime.unboxToFloat(localObject1);
        localObject2 = Bytes.toBytes(f);
    } else if (localObject1 instanceof Integer) {
        int j = BoxesRunTime.unboxToInt(localObject1);
        localObject2 = Bytes.toBytes(j);
    } else if (localObject1 instanceof Long) {
        long l = BoxesRunTime.unboxToLong(localObject1);
        localObject2 = Bytes.toBytes(l);
    } else if (localObject1 instanceof Short) {
        short s = BoxesRunTime.unboxToShort(localObject1);
        localObject2 = Bytes.toBytes(s);
    } else if (localObject1 instanceof UTF8String) {
        UTF8String localUTF8String = (UTF8String) localObject1;
        localObject2 = localUTF8String.getBytes();
    } else {
        // a null 'input' is not an instance of anything, so it ends up here
        if (!(localObject1 instanceof String)) break label323;
        String str = (String) localObject1;
        localObject2 = Bytes.toBytes(str);
    }
    return ((record = field.catalystToAvro().apply(input))
        ? AvroSedes$.MODULE$.serialize(record, (Schema) field.schema().get())
        : (field.sedes().isDefined())
            ? ((Sedes) field.sedes().get()).serialize(input)
            : localObject2);
label323:
    throw new Exception(new StringContext(Predef$.MODULE$.wrapRefArray(
        (Object[]) new String[] { "unsupported data type ", "" }))
        .s(Predef$.MODULE$.genericWrapArray(new Object[] { field.dt() })));
}
```
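As a hedged illustration of the "populate the nulls" workaround, a minimal Scala sketch that fills nullable string columns before writing through the connector; the DataFrame 'df', the column names, and the 'catalog' string are hypothetical placeholders:

```scala
import org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog

// Replace nulls with a sentinel so Utils.toBytes() never receives null.
// "col1" / "col2" are hypothetical nullable string columns.
val filled = df.na.fill("N/A", Seq("col1", "col2"))

filled.write
  .options(Map(HBaseTableCatalog.tableCatalog -> catalog)) // 'catalog' defined elsewhere
  .format("org.apache.spark.sql.execution.datasources.hbase")
  .save()
```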