Member since: 05-29-2017
Posts: 408
Kudos Received: 123
Solutions: 9
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2785 | 09-01-2017 06:26 AM
 | 1697 | 05-04-2017 07:09 AM
 | 1459 | 09-12-2016 05:58 PM
 | 2060 | 07-22-2016 05:22 AM
 | 1626 | 07-21-2016 07:50 AM
09-27-2016
10:32 AM
@jk: There was some network slowness and it was timing out as well. Once that was resolved, I ran the steps you gave above and the problem was fixed. It is now installing without any other errors.
09-22-2016
05:40 AM
@Sowmya Ramesh So as of now we can't stop or suspend all entities at once. Are there any plans to implement such a feature in the future?
09-12-2016
12:21 PM
When you create a database or internal tables from the Hive CLI, the directories are created with 777 permission by default. Even if you have set a umask in HDFS, the permission stays the same. You can change this with the following steps.
1. From the command line on the Ambari server node, edit the file:
vi /var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py
Search for hive_apps_whs_dir, which should take you to this block:
params.HdfsResource(params.hive_apps_whs_dir,
type="directory",
action="create_on_execute",
owner=params.hive_user,
group=params.user_group,
mode=0777
)
2. Modify the value for mode from 0777 to the desired permission, for example 0750. Save and close the file.
3. Restart the Ambari server to propagate the change to all nodes in the cluster:
ambari-server restart
4. From the Ambari UI, restart HiveServer2 to apply the new permission to the warehouse directory. If multiple HiveServer2 instances are configured, restarting any one instance is enough.
Example after the change:
hive> create database test2;
OK
Time taken: 0.156 seconds
hive> dfs -ls /apps/hive/warehouse;
Found 9 items
drwxrwxrwx   - hdpuser hdfs 0 2016-09-08 01:54 /apps/hive/warehouse/test.db
drwxr-xr-x   - hdpuser hdfs 0 2016-09-08 02:04 /apps/hive/warehouse/test1.db
drwxr-x---   - hdpuser hdfs 0 2016-09-08 02:09 /apps/hive/warehouse/test2.db
I hope this will help you serve your purpose.
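A minimal way to double-check the new default after the HiveServer2 restart (a rough sketch; the database name perm_test is just a placeholder):
# create a throwaway database and inspect its warehouse directory permission
hive -e "create database perm_test;"
hdfs dfs -ls /apps/hive/warehouse | grep perm_test.db
# with mode changed to 0750, the entry should show drwxr-x---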
09-07-2016
05:22 PM
@Saurabh Kumar Changing the warehouse directory permission
1. From the command line on the Ambari server node, edit the file /var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py
Search for hive_apps_whs_dir, which should take you to this block:
params.HdfsResource(params.hive_apps_whs_dir,
type="directory",
action="create_on_execute",
owner=params.hive_user,
group=params.user_group,
mode=0755
)
2. Modify the value for mode from 0755 to the desired permission, for example 0777. Save and close the file.
3. Restart the Ambari server to propagate the change to all nodes in the cluster: ambari-server restart
It may take a few seconds for the file to be updated in the Ambari agents on all nodes. To verify that the change has been applied on a particular node, check the content of hive.py in /var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py (see the grep sketch after these steps).
4. From the Ambari UI, restart HiveServer2 to apply the new permission to the warehouse directory. If multiple HiveServer2 instances are configured, restarting any one instance is enough. Hope this helps you.
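A small sketch for that verification step, assuming the same hive.py path mentioned above (it should print the line carrying the new value, e.g. mode=0777):
# show the mode setting currently in hive.py on this node
grep -n "mode" /var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py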
08-31-2016
01:05 PM
Save the password in a text file and connect as below. :)
[root@sandbox ~]# beeline -u 'jdbc:hive2://sandbox.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2' -n rajesh -w pass
WARNING: Use "yarn jar" to launch YARN applications.
Connecting to jdbc:hive2://sandbox.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Connected to: Apache Hive (version 1.2.1000.2.5.0.0-817)
Driver: Hive JDBC (version 1.2.1000.2.5.0.0-817)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.2.1000.2.5.0.0-817 by Apache Hive
0: jdbc:hive2://sandbox.hortonworks.com:2181/>
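In case it is useful, a minimal sketch of creating that password file (the file name pass matches the -w argument above; the password value is just a placeholder):
# store the password without a trailing newline and lock down the file permissions
echo -n 'MyHivePassword' > pass
chmod 600 pass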
08-03-2016
06:23 AM
Thanks @mqureshi. It is working now after I changed my jar to solr-yarn.jar. Earlier I was using a separate jar program to create the Solr index. As Kuldeep mentioned, I tried that property and it worked for my old program as well.
08-01-2016
12:01 PM
1 Kudo
If you unfortunately and unknowingly delete /hdp/apps/2.3.4.0-3485, either with or without -skipTrash, you will be in trouble and other services will be impacted: you will not be able to run Hive, MapReduce, or Sqoop commands, and you will get an error like the one shown in Case 2 below.
Case 1: If you deleted it without -skipTrash, it is very easy to recover:
[root@m1 ranger-hdfs-plugin]# hadoop fs -rmr /hdp/apps/2.3.4.0-3485
rmr: DEPRECATED: Please use 'rm -r' instead.
16/07/28 01:59:22 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 0 minutes.
Moved: 'hdfs://HDPTSTHA/hdp/apps/2.3.4.0' to trash at: hdfs://HDPTSTHA/user/hdfs/.Trash/Current
In this case recovery is easy because the deleted directory goes to your .Trash/Current directory, and you can copy it back from there:
hadoop fs -cp hdfs://HDPTSTHA/user/hdfs/.Trash/Current/hdp/apps/2.3.4.0 /hdp/apps/
Case 2: If you deleted it with -skipTrash, then you need to execute the following steps.
[root@m1 ranger-hdfs-plugin]# hadoop fs -rmr -skipTrash /hdp/apps/2.3.4.0-3485
rmr: DEPRECATED: Please use 'rm -r' instead.
Deleted /hdp/apps/2.3.4.0-3485
Now when I try to access Hive, it throws the error below.
[root@m1 admin]# hive
WARNING: Use "yarn jar" to launch YARN applications.
16/07/27 22:05:04 WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist
Logging initialized using configuration in file:/etc/hive/2.3.4.0-3485/0/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.io.FileNotFoundException: File does not exist: /hdp/apps/2.3.4.0-3485/tez/tez.tar.gz
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:507)
Resolution: Don't worry, you can resolve this issue with the following steps. Note: replace the HDP version with your own.
Step 1: First create the required directories:
hdfs dfs -mkdir -p /hdp/apps/<2.3.4.0-$BUILD>/mapreduce
hdfs dfs -mkdir -p /hdp/apps/<2.3.4.0-$BUILD>/hive
hdfs dfs -mkdir -p /hdp/apps/<2.3.4.0-$BUILD>/tez
hdfs dfs -mkdir -p /hdp/apps/<2.3.4.0-$BUILD>/sqoop
hdfs dfs -mkdir -p /hdp/apps/<2.3.4.0-$BUILD>/pig
Step 2: Now copy the required tarballs into the related directories.
hdfs dfs -put /usr/hdp/2.3.4.0-$BUILD/hadoop/mapreduce.tar.gz /hdp/apps/2.3.4.0-$BUILD/mapreduce/
hdfs dfs -put /usr/hdp/2.3.2.0-<$version>/hive/hive.tar.gz /hdp/apps/2.3.2.0-<$version>/hive/
hdfs dfs -put /usr/hdp/<hdp_version>/tez/lib/tez.tar.gz /hdp/apps/<hdp_version>/tez/
hdfs dfs -put /usr/hdp/<hdp-version>/sqoop/sqoop.tar.gz /hdp/apps/<hdp-version>/sqoop/
hdfs dfs -put /usr/hdp/<hdp-version>/pig/pig.tar.gz /hdp/apps/<hdp-version>/pig/
Step 3: Now change the directory owner and then the permissions:
hdfs dfs -chown -R hdfs:hadoop /hdp
hdfs dfs -chmod -R 555 /hdp/apps/2.3.4.0-$BUILD
Now you will be able to start the Hive CLI and other jobs.
[root@m1 ~]# hive
WARNING: Use "yarn jar" to launch YARN applications.
16/07/27 23:33:42 WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist
Logging initialized using configuration in file:/etc/hive/2.3.4.0-3485/0/hive-log4j.properties
hive>
I hope this will help you restore your cluster. Please feel free to give your suggestions. (For a consolidated version of these commands for this specific build, see the sketch below.)
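A consolidated sketch of the Case 2 recovery for this particular build (2.3.4.0-3485, from the error above); the tarball locations under /usr/hdp are taken from the steps above, so please verify they exist on your nodes before running, and run it as a user with HDFS superuser rights:
# set the build that matches your cluster
HDP_VERSION=2.3.4.0-3485
# recreate the framework directories in HDFS
for comp in mapreduce hive tez sqoop pig; do
  hdfs dfs -mkdir -p /hdp/apps/${HDP_VERSION}/${comp}
done
# upload the tarballs shipped under /usr/hdp on the local filesystem
hdfs dfs -put /usr/hdp/${HDP_VERSION}/hadoop/mapreduce.tar.gz /hdp/apps/${HDP_VERSION}/mapreduce/
hdfs dfs -put /usr/hdp/${HDP_VERSION}/hive/hive.tar.gz /hdp/apps/${HDP_VERSION}/hive/
hdfs dfs -put /usr/hdp/${HDP_VERSION}/tez/lib/tez.tar.gz /hdp/apps/${HDP_VERSION}/tez/
hdfs dfs -put /usr/hdp/${HDP_VERSION}/sqoop/sqoop.tar.gz /hdp/apps/${HDP_VERSION}/sqoop/
hdfs dfs -put /usr/hdp/${HDP_VERSION}/pig/pig.tar.gz /hdp/apps/${HDP_VERSION}/pig/
# restore ownership and read-only permissions
hdfs dfs -chown -R hdfs:hadoop /hdp
hdfs dfs -chmod -R 555 /hdp/apps/${HDP_VERSION}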
08-01-2016
06:41 AM
3 Kudos
@Saurabh Kumar
Please have a look at the documents below; this information is useful for recovery:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_upgrading_hdp_manually/content/configure-yarn-mr-22.html
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_upgrading_hdp_manually/content/start-webhcat-20.html
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_upgrading_hdp_manually/content/start-tez-22.html
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_installing_manually_book/content/upload_pig_hive_sqoop_tarballs_to_hdfs.html
07-31-2016
03:44 AM
@Saurabh Kumar - Nice article! P.S. - I have removed the username and replaced it with $user in the logs.