Member since: 10-24-2015
Posts: 207
Kudos Received: 18
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4453 | 03-04-2018 08:18 PM |
| | 4337 | 09-19-2017 04:01 PM |
| | 1809 | 01-28-2017 10:31 PM |
| | 977 | 12-08-2016 03:04 PM |
05-25-2017
09:49 PM
Hi all,
I had some issues with a node, so I decommissioned it and deleted it from the cluster. After cleaning it up and commissioning it back again, its hostname is still showing up in the yarn.exclude file. I found out about this when the NodeManager kept shutting down because it failed to register with the ResourceManager. Can I just go to the exclude file on the ResourceManagers (HA) and delete the hostname? I don't need to restart any master services like YARN, right? Please advise. Thanks.
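For reference, this is the cleanup I have in mind, as a minimal sketch: the hostname is a placeholder, and I am assuming the exclude file is the one pointed to by yarn.resourcemanager.nodes.exclude-path and that -refreshNodes picks up the change without a ResourceManager restart.

```bash
# On each ResourceManager host (active and standby), drop the re-commissioned
# host from the YARN exclude file (placeholder hostname and path; check
# yarn.resourcemanager.nodes.exclude-path for the real location).
sudo sed -i '/node05.example.com/d' /etc/hadoop/conf/yarn.exclude

# Ask the ResourceManager to re-read the include/exclude lists without a restart.
yarn rmadmin -refreshNodes

# Confirm the node is no longer reported as decommissioned.
yarn node -list -all
```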
Labels:
- Apache YARN
05-18-2017
09:07 PM
Hi @Sonu Sahi, can you give more detail on the connectivity between the Hadoop cluster and the Cassandra cluster, especially when they are in different subnets? Which ports and nodes need access? Thanks, Padma.
05-16-2017
09:27 PM
Hi @Sonu Sahi, thanks for your reply. What about a Sqoop import on the Hadoop side, i.e. importing the Cassandra tables into HDFS from a Hadoop client?
05-15-2017
06:15 PM
Hi, can I get some expert advice on the best possible ways to import Cassandra tables into a Hadoop cluster, and which ports should be open on the Hadoop side for the connection? Thanks in advance.
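One approach I have been considering (a sketch only, not something confirmed in this thread) is to export the table with cqlsh's COPY from the Hadoop client and then push the file into HDFS; the host, keyspace, table, and paths below are placeholders, and the assumption is that the client can reach Cassandra's CQL native port, which defaults to 9042.

```bash
# Export the Cassandra table to CSV from the Hadoop client (placeholder
# host/keyspace/table; assumes the CQL native port, 9042 by default, is open).
cqlsh cassandra-node.example.com 9042 \
  -e "COPY my_keyspace.my_table TO '/tmp/my_table.csv' WITH HEADER = TRUE;"

# Load the exported file into HDFS (placeholder target directory).
hdfs dfs -mkdir -p /data/cassandra/my_table
hdfs dfs -put /tmp/my_table.csv /data/cassandra/my_table/
```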
Labels:
- Apache Hadoop
05-11-2017
07:07 PM
Hi, somehow (or by somebody) a data disk was unmounted from one of the DataNodes and then mounted back again, so the data is still on it. When something like this happens, is the data on that disk considered corrupt, meaning I have to wipe the disk and mount it back empty, or what should I do? What exactly is the procedure to get this disk back into the DataNode, considering this is a production cluster? Thanks.
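To make the question concrete, this is the kind of check I would expect to run after remounting; a sketch only, with a placeholder mount point that I am assuming matches an entry in dfs.datanode.data.dir.

```bash
# Confirm the disk is back on the mount point the DataNode expects
# (placeholder path; it should match an entry in dfs.datanode.data.dir).
df -h /grid/2

# After restarting the DataNode on that host, check for missing or corrupt blocks.
hdfs fsck / | tail -n 30
hdfs dfsadmin -report
```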
Labels:
- Apache Hadoop
03-28-2017
05:33 PM
Hi, does Hortonworks recommend changing Hive queries that previously ran on Tez to run on the Spark engine instead? What are the drawbacks? How beneficial is it, if at all? I just tried a simple SELECT query on a table after setting the execution engine to Spark, and I am getting the following error: java.lang.NoClassDefFoundError: org/apache/spark/SparkConf
at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.generateSparkConf(HiveSparkClientFactory.java:160)
at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:89)
at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:65)
at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:55)
at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:117)
at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:112)
at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:101)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1745)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1491)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1289)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1156)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1146)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:217)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:169)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:380)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:740)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:685)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.SparkConf
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 26 more
FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. org/apache/spark/SparkConf
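For what it's worth, the NoClassDefFoundError suggests the Spark jars are simply not on Hive's classpath. Below is a rough sketch of the kind of setup the Apache Hive-on-Spark getting-started guide describes, with placeholder HDP paths; I am not assuming this configuration is actually supported on HDP.

```bash
# Sketch only: make the Spark classes visible to Hive (placeholder paths).
export SPARK_HOME=/usr/hdp/current/spark-client

# Link the Spark assembly jar into Hive's lib directory so that
# org.apache.spark.SparkConf can be loaded by HiveSparkClientFactory.
ln -s $SPARK_HOME/lib/spark-assembly-*.jar /usr/hdp/current/hive-client/lib/

# Then, inside the Hive session:
#   set hive.execution.engine=spark;
#   set spark.master=yarn-client;
```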
Labels:
- Apache Spark
03-28-2017
03:43 PM
1 Kudo
Do you recommend Hive LLAP for production? We have an Impala job that needs to be replaced in an HDP cluster, and I was thinking of LLAP, but do you think I can use it in production? I am on HDP 2.5.3.
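To make the comparison concrete, this is roughly how I would test a candidate query against LLAP; a sketch with a placeholder host and table, assuming HiveServer2 Interactive is enabled and listening on its default port (10500 in HDP 2.5, as far as I know).

```bash
# Run the same query through the interactive (LLAP) endpoint and compare timings
# (placeholder host and table; 10500 is assumed to be the HiveServer2 Interactive port).
beeline -u "jdbc:hive2://hs2-interactive.example.com:10500/default" \
  -e "set hive.llap.execution.mode=all; select count(*) from my_table;"
```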
03-28-2017
03:40 PM
Where do I install Hue? On one of the masters, or on the edge node? What if my edge node doesn't have enough space under /usr/hdp and only has 2.3 GB left? Can I install it on one of the DataNodes? I am talking about a production environment here.
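For context, this is the kind of check and install I had in mind; a sketch that assumes a hue package is available from the HDP repository on whichever node is chosen.

```bash
# See how much space is actually free where HDP components are installed
# before picking a node.
df -h /usr/hdp

# Install Hue from the configured repository on the chosen node
# (assumes a 'hue' package is available there).
sudo yum install -y hue
```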
Labels:
- Cloudera Hue
02-07-2017
10:35 PM
2 Kudos
Hi, I have been seeing this warning a lot in the YARN ResourceManager logs, for every job run by different users... any idea what it means?
2017-02-07 16:47:01,477 WARN resourcemanager.RMAuditLogger (RMAuditLogger.java:logFailure(267)) - USER=user IP=x.x.x.x OPERATION=AM Released Container TARGET=Scheduler RESULT=FAILURE DESCRIPTION=Trying to release container not owned by app or with invalid id. PERMISSIONS=Unauthorized access or invalid container APPID=application_1485795502013_2891 CONTAINERID=container_e31_1485795502013_2891_01_000313
This is on HDP 2.5.3.
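In case it helps with diagnosis, this is how I have been pulling the logs for one of the affected applications, using the application id from the warning above.

```bash
# Fetch the aggregated logs for the application named in the audit warning,
# to see what the ApplicationMaster was doing when it released the container.
yarn logs -applicationId application_1485795502013_2891 | less

# Cross-check the application's final status as YARN recorded it.
yarn application -status application_1485795502013_2891
```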
Labels:
- Apache YARN