Member since: 01-04-2016
Posts: 409
Kudos Received: 313
Solutions: 35
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 5699 | 01-16-2018 07:00 AM |
 | 1883 | 09-13-2017 06:17 PM |
 | 3744 | 09-13-2017 05:58 AM |
 | 2380 | 08-28-2017 07:16 AM |
 | 4153 | 05-11-2017 11:30 AM |
07-18-2016
03:14 PM
I got the error:

resource_management.core.exceptions.Fail: Execution of 'conf-select set-conf-dir --package hbase --stack-version 2.3.4.0-3485 --conf-version 0' returned 1. /usr/hdp/2.3.4.0-3485/hbase/conf does not exist

but when I check on the server, the files are there:

ll /usr/hdp/2.3.4.0-3485/hbase/conf
total 44
-rw-r--r-- 1 hbase hadoop 2533 Jul 18 11:15 core-site.xml
-rw-r--r-- 1 hbase root 2357 Jul 18 11:15 hadoop-metrics2-hbase.properties
-rw-r--r-- 1 hbase root 2916 Jul 18 11:15 hbase-env.sh
-rw-r--r-- 1 hbase hadoop 401 Jul 18 11:15 hbase-policy.xml
-rw-r--r-- 1 hbase hadoop 5152 Jul 18 11:15 hbase-site.xml
-rw-r--r-- 1 hbase hadoop 6568 Jul 18 11:15 hdfs-site.xml
-rw-r--r-- 1 hbase hadoop 4235 Jul 18 11:15 log4j.properties
-rw-r--r-- 1 hbase root 24 Jul 18 11:15 regionservers
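A hedged note on where to look: in the usual HDP 2.3 layout, /usr/hdp/<version>/hbase/conf is a symlink backed by /etc/hbase/<version>/0, so this message can appear when that symlink structure is broken even though the files are visible. A minimal diagnostic sketch under that assumption (paths are taken from the error above; verify against your cluster before running anything):

# Check whether conf is a symlink or a plain directory, and whether the
# backing directory that conf-select manages exists.
ls -ld /usr/hdp/2.3.4.0-3485/hbase/conf
ls -ld /etc/hbase/2.3.4.0-3485/0

# If /etc/hbase/2.3.4.0-3485/0 is missing, create it and preserve the
# current configs there before retrying.
mkdir -p /etc/hbase/2.3.4.0-3485/0
cp -a /usr/hdp/2.3.4.0-3485/hbase/conf/. /etc/hbase/2.3.4.0-3485/0/

# Then re-run the exact command from the error so conf-select can
# re-create the expected link.
conf-select set-conf-dir --package hbase --stack-version 2.3.4.0-3485 --conf-version 0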
07-18-2016
02:42 PM
@mjohnson My problem is that Ambari detects that the client is installed, but when I restart the HBase client to refresh its configs, it fails. Below is the output of hdp-select:

[hdfs@qacluster ~]$ /usr/bin/hdp-select
accumulo-client - None
accumulo-gc - None
accumulo-master - None
accumulo-monitor - None
accumulo-tablet - None
accumulo-tracer - None
atlas-server - None
falcon-client - None
falcon-server - None
flume-server - None
hadoop-client - 2.3.4.0-3485
hadoop-hdfs-datanode - 2.3.4.0-3485
hadoop-hdfs-journalnode - 2.3.4.0-3485
hadoop-hdfs-namenode - 2.3.4.0-3485
hadoop-hdfs-nfs3 - 2.3.4.0-3485
hadoop-hdfs-portmap - 2.3.4.0-3485
hadoop-hdfs-secondarynamenode - 2.3.4.0-3485
hadoop-httpfs - 2.3.4.0-3485
hadoop-mapreduce-historyserver - 2.3.4.0-3485
hadoop-yarn-nodemanager - 2.3.4.0-3485
hadoop-yarn-resourcemanager - 2.3.4.0-3485
hadoop-yarn-timelineserver - 2.3.4.0-3485
hbase-client - 2.3.4.0-3485
hbase-master - 2.3.4.0-3485
hbase-regionserver - 2.3.4.0-3485
hive-metastore - None
hive-server2 - None
hive-webhcat - None
kafka-broker - None
knox-server - None
mahout-client - None
oozie-client - None
oozie-server - None
phoenix-client - 2.3.4.0-3485
phoenix-server - 2.3.4.0-3485
ranger-admin - None
ranger-kms - None
ranger-usersync - None
slider-client - None
spark-client - None
spark-historyserver - None
spark-thriftserver - None
sqoop-client - None
sqoop-server - None
storm-client - None
storm-nimbus - None
storm-slider-client - None
storm-supervisor - None
zookeeper-client - 2.3.4.0-3485
zookeeper-server - 2.3.4.0-3485

And hdp-select versions shows:
2.3.2.0-2950
2.3.4.0-3485
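Since both stack versions are present, a hedged next step is to explicitly re-point the HBase client bits at the target version with hdp-select and confirm it took before retrying the client restart from Ambari (component name as listed above; 2.3.4.0-3485 is the version being upgraded to):

# Explicitly point hbase-client at the new stack version and verify.
hdp-select set hbase-client 2.3.4.0-3485
hdp-select status hbase-client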
07-18-2016
02:38 PM
@mjohnson This failed with exit code 1:

INFO 2016-07-18 09:51:22,789 PythonExecutor.py:114 - Command ['/usr/bin/python2.7',
u'/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/hook.py',
u'INSTALL',
'/var/lib/ambari-agent/data/command-353.json',
u'/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/after-INSTALL',
'/var/lib/ambari-agent/data/structured-out-353.json',
'INFO',
'/var/lib/ambari-agent/data/tmp'] failed with exitcode=1
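The agent log above only records the exit code; a hedged way to get at the underlying Python error is to read the per-command output files the ambari-agent keeps (file names below assume the command number 353 from the log), or to re-run the hook by hand with the same arguments:

# Per-command stdout/stderr captured by the agent (assumed naming based
# on command number 353 in the log above).
cat /var/lib/ambari-agent/data/output-353.txt
cat /var/lib/ambari-agent/data/errors-353.txt

# Re-run the failing hook manually with the arguments from the log to
# reproduce the full traceback interactively.
/usr/bin/python2.7 \
  /var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/hook.py \
  INSTALL \
  /var/lib/ambari-agent/data/command-353.json \
  /var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/after-INSTALL \
  /var/lib/ambari-agent/data/structured-out-353.json \
  INFO \
  /var/lib/ambari-agent/data/tmp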
07-18-2016
01:39 PM
1 Kudo
I am getting an error after upgrading the HDP version from 2.3.2.0-2950 to 2.3.4.0-3485: HBASE_CLIENT in invalid state. Invalid transition. Invalid event: HOST_SVCCOMP_OP_IN_PROGRESS at INSTALL_FAILED. I tried
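A hedged workaround that is often used when a component is stuck in INSTALL_FAILED is to ask Ambari to reinstall it through the REST API, which resets its state machine. A sketch follows; admin:admin, ambari-server:8080, CLUSTER and HOST are all placeholders that need to be replaced with this cluster's real values:

# Retry the install of the stuck HBASE_CLIENT component via the Ambari
# REST API. Credentials, server address, cluster name and host name
# below are placeholders.
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Reinstall HBASE_CLIENT"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' \
  http://ambari-server:8080/api/v1/clusters/CLUSTER/hosts/HOST/host_components/HBASE_CLIENT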
Labels:
- Hortonworks Data Platform (HDP)
07-18-2016
01:13 PM
Can anyone help me tune Spark so the same job runs on a 32 GB system? My cluster has 3 nodes with 32 GB each; I think 32 GB per node should be enough, and free memory was always around 20 GB on every node.
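A hedged sketch of the knobs that usually matter when the job runs Spark 1.6 in local mode from a plain JVM launch (as with the java -cp command shared in this thread): the driver heap is whatever -Xmx allows, and SparkConf also picks up spark.* Java system properties, so they can be passed as -D flags. The heap size, spill directory and partition count below are illustrative values, not a tested recommendation:

# Illustrative launch for a 32 GB node: cap the heap below physical RAM,
# point spills at a disk with space (the path is hypothetical), and raise
# the shuffle partition count to lower per-partition memory pressure.
java -Xmx24g \
  -Dspark.master=local[*] \
  -Dspark.local.dir=/data/spark-tmp \
  -Dspark.sql.shuffle.partitions=400 \
  -cp .:spark-assembly-1.6.1.2.4.2.0-258-hadoop2.7.1.2.4.2.0-258.jar:commons-csv-1.1.jar:spark-csv_2.10-1.4.0.jar \
  SparkMainPlain xyz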
07-18-2016
10:15 AM
1 Kudo
The issue was resolved after increasing the physical RAM of the machine, and it is now working fine. I was running the job on a 32 GB node; I increased it to 64 GB and ran the same code 3-4 times.
07-18-2016
08:20 AM
@Arun A K This is the command (we are reading CSV files):

java -cp .:spark-assembly-1.6.1.2.4.2.0-258-hadoop2.7.1.2.4.2.0-258.jar:commons-csv-1.1.jar:spark-csv_2.10-1.4.0.jar SparkMainPlain xyz
07-15-2016
01:16 PM
1 Kudo
We have a 3-node cluster and each node has 32 GB of RAM, but the system still goes into a hung state after running the job. The job converts a DataFrame to CSV using com.databricks.spark.csv.
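Since the node hangs rather than the job failing cleanly, a hedged diagnostic is to watch OS memory and JVM garbage collection while the job runs and see whether the hang lines up with memory exhaustion (jps and jstat ship with the JDK; the grep pattern assumes the SparkMainPlain driver class from the command in this thread):

# Overall memory and swap usage on the node.
free -m

# Heap occupancy and GC activity of the driver JVM, sampled every 5 s.
PID=$(jps | grep SparkMainPlain | awk '{print $1}')
jstat -gcutil "$PID" 5000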
Labels:
- Apache Spark
07-13-2016
01:06 PM
Yes, the Spark job failed. We are trying to coalesce the file but are getting the error.