Archives of Support Questions (Read Only)

This is an archived board kept for historical reference. Information and links may no longer be available or relevant.
Announcements
This board is archived and read-only for historical reference. To ask a new question, please post a new topic on the appropriate active board.

"Can't open hadoop-conf/hive-env.sh: No such file or directory" when deploying client configurations

Explorer

Hello all,

 

When running the Deploy Client Configuration command in Cloudera Manager, I got the following error for all services (HDFS, YARN, Hive, ...).

 

I am using the latest release, CDH 5.16.1-1.

 

I have checked the following path, but the file mentioned in the error message is not there:

 

/run/cloudera-scm-agent/process/ccdeploy_hadoop-conf_etchadoopconf.cloudera.hdfs_2512129932269251687/hadoop-conf/



+ [[ hadoop-conf/hive-env.sh == \h\i\v\e\-\s\i\t\e\.\x\m\l ]]
+ perl -pi -e 's#{{HIVE_HBASE_JAR}}#/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hive/lib/hive-hbase-handler-1.1.0-cdh5.16.1.jar,/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hbase/hbase-client.jar,/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hbase/hbase-common.jar,/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hbase/hbase-hadoop-compat.jar,/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hbase/hbase-hadoop2-compat.jar,/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hbase/hbase-protocol.jar,/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hbase/hbase-server.jar,/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hbase/lib/htrace-core-3.2.0-incubating.jar,/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hbase/lib/htrace-core.jar,/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hbase/lib/htrace-core4-4.0.1-incubating.jar#g' /run/cloudera-scm-agent/process/ccdeploy_hadoop-conf_etchadoopconf.cloudera.hdfs_2512129932269251687/hadoop-conf/hive-env.sh
Can't open /run/cloudera-scm-agent/process/ccdeploy_hadoop-conf_etchadoopconf.cloudera.hdfs_2512129932269251687/hadoop-conf/hive-env.sh: No such file or directory.
++ dirname /etc/hadoop/conf.cloudera.hdfs
+ ROOT_DIR_NAME=/etc/hadoop
+ '[' '!' -e /etc/hadoop ']'
+ sudo mkdir -p /etc/hadoop

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

sudo: no tty present and no askpass program specified

 I have checked the following path as well:

 

/var/lib/alternatives/

 

There are no config files there either, as described in https://community.cloudera.com/t5/Cloudera-Manager-Installation/Cloudera-manger-HBase-Deploy-Client-...

 

How can I fix this issue, please?

 

Thanks,

1 ACCEPTED SOLUTION

Explorer

Found the issue, woohoo!

 

I had already added the cloudera-scm user to the wheel group:

sudo usermod -aG wheel cloudera-scm

 

 

But the problem was that the sudoers file also needs the following line uncommented (edit it with visudo):

sudo visudo
%wheel  ALL=(ALL)       NOPASSWD: ALL

For now, I have passed the Deploy Client Configuration step, but I am now getting different errors when starting the services, like ZooKeeper. 😞

 

I will investigate, and if I cannot find a solution, I will ask here.

 

 


6 REPLIES

Expert Contributor
Hi,

This can be caused by the absence of /var/lib/alternatives/hadoop-conf on a specific host.

Did you try restarting the Cloudera agent service? That could rebuild the alternatives.

Run the script below to check that the alternatives are linked properly:

ls -lart /etc/alternatives | grep "CDH" | while read a b c d e f g h i j k
do
    # $i is the ninth field of the ls -l output: the symlink name
    alternatives --display $i
done
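Since the missing piece here seems to be the alternatives state files themselves, it can also help to check that directory directly. Below is a small sketch (the helper name and the list of alternatives to check are mine, not Cloudera's); it defaults to the RPM alternatives state directory, /var/lib/alternatives.

```shell
# Report which alternatives are missing a state file under the
# alternatives state directory (default: /var/lib/alternatives).
ALT_STATE_DIR="${ALT_STATE_DIR:-/var/lib/alternatives}"

check_alt_state() {
    # Prints "missing: <path>" for each name without a state file;
    # returns non-zero if any were missing.
    local missing=0 name
    for name in "$@"; do
        if [ ! -e "$ALT_STATE_DIR/$name" ]; then
            echo "missing: $ALT_STATE_DIR/$name"
            missing=1
        fi
    done
    return "$missing"
}
```

For example, `check_alt_state hadoop-conf hive-conf hbase-conf` will list whichever of those state files are absent on that host.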

Let us know if you have any questions.

Thanks,
Jerry

Explorer

Hi Jerry,

 

I do not have /var/lib/alternatives/hadoop-conf on any of my servers (NameNode or DataNodes), and I am not sure why it is missing. How can I fix this?

I ran your script, but it produced no output.

 

Can you please help me out? 

 

Thanks,

Mo

Explorer

I also restarted the Cloudera agent service, with no result.

Explorer

For example, here is the stdout and stderr of the Hive deployment on one of my nodes.

 

There is no log that helps me fix this issue. All required ports are open; I have reinstalled different versions and restarted the servers and services, with no luck. 😞

 

 

 

stdout:

Thu Dec 20 10:44:06 EST 2018
using /usr/java/jdk1.7.0_67-cloudera as JAVA_HOME
using 5 as CDH_VERSION
using /run/cloudera-scm-agent/process/ccdeploy_hive-conf_etchiveconf.cloudera.hive_603500038649041030 as CONF_DIR
using hive-conf as DIRECTORY_NAME
using /etc/hive/conf.cloudera.hive as DEST_PATH
using hive-conf as ALT_NAME
using /etc/hive/conf as ALT_LINK
using 90 as PRIORITY
using  as RUNNER_PROGRAM
using  as RUNNER_ARGS
using /usr/sbin/update-alternatives as UPDATE_ALTERNATIVES
Deploying service client configs to /etc/hive/conf.cloudera.hive
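For reference, with the variables shown in the stdout above (ALT_LINK, ALT_NAME, DEST_PATH, PRIORITY), the deploy script would register the client configuration roughly as follows. This is a sketch of the standard update-alternatives --install syntax, not the script's verbatim command:

```
# --install <link> <name> <path> <priority>
/usr/sbin/update-alternatives --install /etc/hive/conf hive-conf /etc/hive/conf.cloudera.hive 90
```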

 

 

stderr:

# Deploy client configuration on hadoopnn1.mydomain.com
+ [[ hive-conf/hive-env.sh == \h\i\v\e\-\s\i\t\e\.\x\m\l ]]
+ perl -pi -e 's#{{HIVE_HBASE_JAR}}#/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hive/lib/hive-hbase-handler-1.1.0-cdh5.16.1.jar,/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hbase/hbase-client.jar,/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hbase/hbase-common.jar,/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hbase/hbase-hadoop-compat.jar,/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hbase/hbase-hadoop2-compat.jar,/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hbase/hbase-protocol.jar,/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hbase/hbase-server.jar,/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hbase/lib/htrace-core-3.2.0-incubating.jar,/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hbase/lib/htrace-core.jar,/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hbase/lib/htrace-core4-4.0.1-incubating.jar#g' /run/cloudera-scm-agent/process/ccdeploy_hive-conf_etchiveconf.cloudera.hive_603500038649041030/hive-conf/hive-env.sh
++ dirname /etc/hive/conf.cloudera.hive
+ ROOT_DIR_NAME=/etc/hive
+ '[' '!' -e /etc/hive ']'
+ sudo mkdir -p /etc/hive

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

sudo: no tty present and no askpass program specified
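The last line of that stderr is the key symptom: the agent invokes sudo without a terminal, so sudo cannot prompt for a password and fails. A quick way to reproduce this condition is to run the command with no stdin, as in the hedged sketch below (the helper name is mine, not Cloudera's):

```shell
# Return success only if the given command completes without a terminal
# to prompt on -- the same situation the agent's sudo call is in.
runs_noninteractive() {
    "$@" </dev/null >/dev/null 2>&1
}

# Example: sudo -n fails immediately instead of prompting when
# passwordless sudo is not configured for the current user.
if runs_noninteractive sudo -n true; then
    echo "passwordless sudo OK"
else
    echo "sudo would need a password (deploy would fail here)"
fi
```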


Explorer

I also see the following entries in the agent log file (there are many of them):

 

[20/Dec/2018 10:36:42 +0000] 21974 MainThread parcel ERROR Failed to activate alternatives for parcel CDH-5.16.1-1.cdh5.16.1.p0.3: 1
[20/Dec/2018 10:36:42 +0000] 21974 MainThread parcel INFO Executing command ['/usr/lib64/cmf/service/common/alternatives.sh', 'activate', 'hive', '/usr/bin/hive', 'bin/hive', '10', 'False']
[20/Dec/2018 10:36:44 +0000] 21974 MainThread parcel ERROR Failed to activate alternatives for parcel CDH-5.16.1-1.cdh5.16.1.p0.3: 1
[20/Dec/2018 10:36:44 +0000] 21974 MainThread parcel INFO Executing command ['/usr/lib64/cmf/service/common/alternatives.sh', 'activate', 'hadoop-0.20', '/usr/bin/hadoop-0.20', 'bin/hadoop-0.20', '10', 'False']
[20/Dec/2018 10:36:46 +0000] 21974 MainThread parcel ERROR Failed to activate alternatives for parcel CDH-5.16.1-1.cdh5.16.1.p0.3: 1
[20/Dec/2018 10:36:46 +0000] 21974 MainThread parcel INFO Executing command ['/usr/lib64/cmf/service/common/alternatives.sh', 'activate', 'hbase', '/usr/bin/hbase', 'bin/hbase', '10', 'False']
[20/Dec/2018 10:36:48 +0000] 21974 MainThread parcel ERROR Failed to activate alternatives for parcel CDH-5.16.1-1.cdh5.16.1.p0.3: 1
[20/Dec/2018 10:36:48 +0000] 21974 MainThread parcel INFO Executing command ['/usr/lib64/cmf/service/common/alternatives.sh', 'activate', 'sqoop-metastore', '/usr/bin/sqoop-metastore', 'bin/sqoop-metastore', '10', 'False']
[20/Dec/2018 10:36:50 +0000] 21974 MainThread parcel ERROR Failed to activate alternatives for parcel CDH-5.16.1-1.cdh5.16.1.p0.3: 1
[20/Dec/2018 10:36:50 +0000] 21974 MainThread parcel INFO Executing command ['/usr/lib64/cmf/service/common/alternatives.sh', 'activate', 'hadoop-fuse-dfs', '/usr/bin/hadoop-fuse-dfs', 'bin/hadoop-fuse-dfs', '10', 'False']
[20/Dec/2018 10:36:52 +0000] 21974 MainThread parcel ERROR Failed to activate alternatives for parcel CDH-5.16.1-1.cdh5.16.1.p0.3: 1
[20/Dec/2018 10:36:52 +0000] 21974 MainThread parcel INFO Executing command ['/usr/lib64/cmf/service/common/alternatives.sh', 'activate', 'hdfs', '/usr/bin/hdfs', 'bin/hdfs', '10', 'False']
[20/Dec/2018 10:36:54 +0000] 21974 MainThread parcel ERROR Failed to activate alternatives for parcel CDH-5.16.1-1.cdh5.16.1.p0.3: 1
