Member since: 11-03-2017
Posts: 94
Kudos Received: 13
Solutions: 4
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 5300 | 04-11-2018 09:48 AM |
 | 1875 | 12-08-2017 01:07 PM |
 | 2402 | 12-01-2017 04:23 PM |
 | 11841 | 11-06-2017 04:08 PM |
06-25-2020
10:14 AM
Hi Sihi, I also encountered a similar issue while enabling HA on the NameNode. We usually fall into this because some instruction gets missed during the HA wizard; in my case I missed the step that creates a checkpoint on the second NameNode. To overcome this, start the HA enablement process once again from the Ambari UI and follow every step carefully. It will try to set up the second NameNode again, and you will be able to bring the cluster back to a green state; the manual checkpoint step is sketched below. Thanks Arun
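For reference, when the wizard asks for the checkpoint, the commands usually look like this on the active NameNode host (assuming hdfs is the HDFS service user):

```
# Enter safe mode, then persist a fresh checkpoint of the namespace.
sudo su hdfs -l -c 'hdfs dfsadmin -safemode enter'
sudo su hdfs -l -c 'hdfs dfsadmin -saveNamespace'
```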
11-15-2019
03:49 AM
Hello @JordanMoore @shashankvc @sihi_yassine, As per your requirement, you want a list of all external Hive tables along with their HDFS locations: the database name, table name, table type (EXTERNAL) and the HDFS location of each external Hive table. First, log in to the Hive metastore database (the one storing all the Hive metadata). You need three tables, TBLS, DBS and SDS; on top of these three you can apply joins on DB_ID and SD_ID, as sketched below. For more information and the query output, please check this link: https://askdoubts.com/question/how-to-find-out-list-of-all-hive-external-tables-and-hdfs-paths-from-hive-metastore/#comment-19 Thanks, Mahesh
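A sketch of that join, assuming a MySQL-backed metastore whose database, like its user, is named hive (adjust names and credentials to your setup):

```
mysql -u hive -p -D hive -e "
SELECT d.NAME     AS db_name,
       t.TBL_NAME AS table_name,
       t.TBL_TYPE AS table_type,
       s.LOCATION AS hdfs_location
FROM   TBLS t
JOIN   DBS  d ON t.DB_ID = d.DB_ID
JOIN   SDS  s ON t.SD_ID = s.SD_ID
WHERE  t.TBL_TYPE = 'EXTERNAL_TABLE';"
```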
07-03-2018
11:31 AM
@Yassine Yes, you can use pandas and Matplotlib along with PySpark. For example, you can use the Spark API to read data from the cluster in parallel and process it, then convert the Spark DataFrame to a pandas DataFrame and use Matplotlib to show the results. There are other interactions, but I think this is the most common one I've seen.
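A minimal sketch of that flow, assuming PySpark, pandas and Matplotlib are installed on the node you run it from (the file names, data and app name below are illustrative):

```
# Write a small PySpark job to a temp file, then submit it.
cat > /tmp/pandas_plot_demo.py <<'EOF'
from pyspark.sql import SparkSession
import matplotlib
matplotlib.use("Agg")                 # render without a display (edge node)
import matplotlib.pyplot as plt

spark = SparkSession.builder.appName("pandas-plot-demo").getOrCreate()
# In practice this would be spark.read over cluster data; a tiny inline
# DataFrame keeps the example self-contained.
df = spark.createDataFrame([(1, 2.0), (2, 4.0), (3, 8.0)], ["x", "y"])
pdf = df.toPandas()                   # collect to the driver as pandas
pdf.plot(x="x", y="y")
plt.savefig("/tmp/demo.png")
spark.stop()
EOF
spark-submit /tmp/pandas_plot_demo.py
```

Note that toPandas() collects the whole DataFrame to the driver, so aggregate or sample large inputs down first.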
04-01-2018
09:27 PM
@Yassine Looking at your log, it seems like you are trying to change the datatype in Spark. Is that the case? If yes, use a statement like:

```
val a = sqlContext.sql("alter table tableName change col col bigint")
```

Regarding the issue you are facing while converting the type of the column, you need to understand the available datatypes and the implicit cast options between them. Whenever you issue a command like

```
alter table tableName change columnName columnName <newDataType>;
```

you need to be aware that the column may hold data that is string-typed today, and if you cast it to a datatype like int, values that cannot be cast will not be accessible and will come back as NULL. Check this link for Hive datatypes and the implicit cast options available.
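To see that NULL behavior concretely, here is a small illustration with a hypothetical throwaway table, assuming the Hive CLI is available and your metastore permits incompatible column-type changes:

```
# After the type change, values that cannot be read as BIGINT come back as
# NULL ('abc' below), while '123' reads back as the number 123.
hive -e "
CREATE TABLE cast_demo (c STRING);
INSERT INTO cast_demo VALUES ('123'), ('abc');
ALTER TABLE cast_demo CHANGE c c BIGINT;
SELECT c FROM cast_demo;"
```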
03-01-2018
06:50 PM
@hema moger Great, if it's a Linux server then create a passwordless login between the remote server and the edge node. First, update your /etc/hosts so that the remote server is pingable from your edge node, check the firewall rules, and make sure you don't have a DENY rule. Here is the walkthrough (see attached pic1.jpg). In my case I have a CentOS server GULU and a Cloudera Quickstart VM running in Oracle VM VirtualBox; because they are on the same network, it's easy.

GULU remote server: I want to copy the file test.txt, which is located in /home/sheltong/Downloads.

```
[root@gulu ~]# cd /home/sheltong/Downloads
[root@gulu Downloads]# ls
test.txt
```

Edge node or localhost:

```
[root@quickstart home]# scp root@192.168.0.80:/home/sheltong/Downloads/test.txt .
The authenticity of host '192.168.0.80 (192.168.0.80)' can't be established.
RSA key fingerprint is 93:8a:6c:02:9d:1f:e1:b5:0a:05:68:06:3b:7d:a3:d3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.80' (RSA) to the list of known hosts.
root@192.168.0.80's password: xxxxxremote_server_root_passwordxxx
test.txt                                   100%  136   0.1KB/s   00:00
```

Validate that the file was copied:

```
[root@quickstart home]# ls
cloudera  test.txt
```

There you are, I hope that helped.
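For the passwordless login mentioned at the start, a minimal sketch, assuming OpenSSH on both hosts (the IP is the remote server from the walkthrough):

```
# On the edge node: generate a key pair (accept the defaults), copy the
# public key to the remote server; subsequent scp/ssh won't prompt.
ssh-keygen -t rsa
ssh-copy-id root@192.168.0.80
scp root@192.168.0.80:/home/sheltong/Downloads/test.txt .
```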
02-26-2018
12:35 PM
@Jay Kumar SenSharma

```
nc: connect to 10.166.54.12 port 8020 (tcp) failed: Connection refused
tcp        0      0 10.166.54.12:8020      0.0.0.0:*      LISTEN      19578/java
```
02-25-2018
08:34 PM
@Yassine If this answers your query, please mark this thread as answered by clicking the "Accepted" button; that way other HCC users can quickly browse the answered queries.
01-22-2018
03:00 PM
1 Kudo
@yassine sihi, There is a JSON file (role_command_order.json) which specifies the dependencies for starting/stopping the services. If there is no dependency, then the start/stop of services across hosts runs in parallel. You can find the files by running this command on the Ambari server node:

```
find /var/lib/ambari-server/resources -iname role_command_order.json
```

Files inside common-services (/var/lib/ambari-server/resources/common-services) specify the dependencies at the service level, whereas files inside /var/lib/ambari-server/resources/stacks specify the overall dependencies at the stack level. Consider this sample line in one of the files:

```
"LIVY_SERVER-START" : ["NAMENODE-START", "DATANODE-START", "APP_TIMELINE_SERVER-START"]
```

This specifies that the Livy server start is dependent on the NameNode, DataNode and App Timeline Server starts. Hope this helps 🙂 Thanks, Aditya
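For example, to see every dependency entry that mentions a given component across all of those files, something like this should work (the LIVY_SERVER pattern is just an illustration):

```
# Print each matching line along with the file it came from.
find /var/lib/ambari-server/resources -iname role_command_order.json \
  -exec grep -H 'LIVY_SERVER' {} \;
```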
01-08-2018
11:35 AM
1 Kudo
@yassine sihi, Try removing the conf directory manually and then reinstalling HBase:

```
rm -rf /usr/hdp/2.5.0.0-1245/hbase/conf
```

Thanks, Aditya
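A slightly safer variant, in case anything in that directory is still needed, is to move it aside instead of deleting it (same path as above):

```
# Keep a backup of the old config; reinstalling HBase recreates conf.
mv /usr/hdp/2.5.0.0-1245/hbase/conf /usr/hdp/2.5.0.0-1245/hbase/conf.bak
```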