Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 611 | 06-04-2025 11:36 PM |
| | 1177 | 03-23-2025 05:23 AM |
| | 584 | 03-17-2025 10:18 AM |
| | 2186 | 03-05-2025 01:34 PM |
| | 1375 | 03-03-2025 01:09 PM |
11-18-2019
11:24 PM
@divya_thaore
1. If you are starting the service via the Ambari / Cloudera Manager UI, check the operation logs displayed in the UI while the service starts; look for any errors by clicking the service's operational logs.
2. If you do not see any operational logs, or no operation is triggered when you start/restart the service, then restart the agent service once (ambari-agent / cloudera-scm-agent).
3. Otherwise, check the logs from the CLI as suggested by @Shelton.
If you still need help, please revert.
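For step 2, the agent restart is a one-liner on the affected host; the service name depends on whether the host is managed by Ambari or by Cloudera Manager (the commands below assume the stock agent packages and an init-script-based OS):

```shell
# On an Ambari-managed host:
ambari-agent restart

# On a Cloudera Manager-managed host:
service cloudera-scm-agent restart
```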
11-18-2019
11:09 PM
@mike_bronson7 The latest command you posted again has a typo: an "R" is missing at the end of the component name in the command below.
>> curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://node02:8080/api/v1/clusters/HDP/hosts/node01/host_components/SPARK2_THRIFTSERVE"
Please try again and post the new error, if any.
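For convenience, here is the same call with the trailing "R" restored (host names and credentials taken verbatim from the command above, so adjust for your cluster):

```shell
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE \
  "http://node02:8080/api/v1/clusters/HDP/hosts/node01/host_components/SPARK2_THRIFTSERVER"
```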
11-16-2019
05:02 PM
Thank you for the info. Yes, I have created a backup in another directory, and I was about to restart the NameNode from that image.
11-15-2019
05:32 AM
Quick update: I worked with @fgarcia to edit the post in question. 🙂
11-14-2019
08:17 PM
@deekshant To debug the NameNode issue, check the following:
1. The active NameNode (NN) logs, around the time it rebooted.
2. The active NN ZKFC logs for the same window, for any issue.
3. The standby NN logs at the same time, for any errors.
4. The standby NN ZKFC logs for errors at the same timestamp.
5. The active NN .out file for any warnings/errors.
6. The system log /var/log/messages for any issue at that particular moment.
You will find the error in one of the above files, and you can proceed with the RCA accordingly. Do revert if you need further help.
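A quick way to work through the checklist above is to grep each log for the reboot window and then surface errors and warnings. A minimal sketch, where the sample log file and its timestamps are hypothetical; substitute the real NameNode/ZKFC log paths from your cluster:

```shell
# Create a small hypothetical log to illustrate (replace with your real NN log path)
cat > /tmp/nn_sample.log <<'EOF'
2019-11-14 08:10:01,123 INFO  NameNode: STARTUP_MSG
2019-11-14 08:17:30,456 ERROR FSEditLog: Failed to roll edit log
2019-11-14 08:20:05,789 INFO  Checkpoint: complete
EOF

# Restrict to the suspected time window, then surface errors/warnings
grep -E '^2019-11-14 08:1[0-9]' /tmp/nn_sample.log | grep -iE 'error|warn'
```

The same two-stage filter works on the ZKFC logs and /var/log/messages; only the path and timestamp pattern change.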
11-11-2019
12:22 PM
@Rak You have a couple of errors in your Sqoop syntax, but you are almost there. Please look at the hints below and retry after understanding and correcting them.
1. `sqoop import--connect` is wrong; you need a space between `import` and `--`, i.e. `sqoop import --connect`.
2. `'jdbc:sqlserver'--username` is also not correct; you need a host, a port number and a database name, i.e. "jdbc:sqlserver://<Server_Host>:<Server_Port>;databaseName=<DB_Name>".
3. The quoting around '2019-11-08'" is wrong too.
4. All your `--` options need a space before them. Your original command was:
sqoop import--connect 'jdbc:sqlserver'--username 'sa' -P--query "select * from dlyprice where $CONDITIONS AND `date`= '2019-11-08'"--split-by `date--target-dir /home/hduser2 -m 2
Try something like this:
sqoop import --connect "jdbc:sqlserver://<Server_Host>:<Server_Port>;databaseName=<DB_Name>" \
  --driver com.microsoft.sqlserver.jdbc.SQLServerDriver \
  --username XXXX -P \
  --query "select * from dlyprice where \$CONDITIONS AND date = '2019-11-08'" \
  --split-by date --target-dir /home/hduser2 -m 2
The above isn't tested, but I wanted to highlight some of your mistakes; making it work yourself makes you a better hadooper! Here is a link to the Sqoop User Guide. Alternatively, can you try out this syntax, remembering to replace the values with those of your environment:
sqoop import --connect "jdbc:sqlserver://hostname;username='sa';password='sa_password';database=yourDB" --driver com.microsoft.sqlserver.jdbc.SQLServerDriver --query "select * from dlyprice where \$CONDITIONS" --split-by date -m 2 --target-dir /home/hduser2
Please revert.
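One of the subtler fixes above is the quoting of $CONDITIONS: inside double quotes the shell expands it (usually to an empty string) before Sqoop ever sees it, which is why the working command writes \$CONDITIONS. A small shell sketch of the difference; the query text is only illustrative:

```shell
# Double quotes: the shell expands $CONDITIONS (unset here, so it vanishes)
unescaped="select * from dlyprice where $CONDITIONS"

# Escaped: the literal token survives, so Sqoop can substitute its own predicate
escaped="select * from dlyprice where \$CONDITIONS"

echo "$unescaped"
echo "$escaped"
```

Only the second form hands Sqoop the placeholder it needs to split the query across mappers.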
11-10-2019
11:47 PM
@wimster @Zeba Surely that's great, but during the webinar neither Lakshmi Randall, Wim Stoop nor Matthew could commit to Cloudera's release date, hence "before end of year" 🙂 That is understandable: in case the release date is missed, a firm date would not be a good image. As an insider, I am sure you have more information; can you share the link to that release info? I would like to test-drive it this week.
11-09-2019
05:07 PM
@Shelton Glad that you could reproduce this problem. I have ten brokers, and every broker is configured with its respective IP address, but I am afraid I cannot provide a screenshot for some reasons. I have a workaround for this problem. In my opinion, the cause is that Ambari cannot recognize the IP and port bindings correctly, so the solution is to avoid that challenge for it altogether. Here is my configuration now:
listeners=SASL_PLAINTEXT://0.0.0.0:9092,EXTERNAL://0.0.0.0:19092
advertised.listeners=SASL_PLAINTEXT://192.168.1.1:9092,EXTERNAL://88.88.88.88:19092
The other configurations stay unchanged. Notice that I changed `listeners` so that even if Ambari cannot recognize the right IP and port binding, both ports are now available on all interfaces, so there are no false alerts now. Thank you so much for helping me with this problem, and apologies for the late response.
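One caveat with a custom listener name like EXTERNAL: Kafka only infers the security protocol for the built-in names, so a named listener normally needs a matching entry in the protocol map as well. A hedged sketch, assuming EXTERNAL should also speak SASL_PLAINTEXT; adjust to your actual setup:

```
listener.security.protocol.map=SASL_PLAINTEXT:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
```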
11-07-2019
09:44 PM
@Manoj690 Can you check whether authorization has been delegated to Ranger / Kerberos / SQL Standard Auth? If you have the Ranger plugin for Hive enabled, then authorization has been delegated to Ranger, the central authority, and you will need to grant the permissions through Ranger for all Hive databases. What is it set to under Hive > Configs > Settings > Security?
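For reference, when the Ranger Hive plugin is in charge you would typically see HiveServer2 pointed at Ranger's authorizer. A sketch of the relevant hive-site.xml properties; the class name assumes the stock Ranger Hive plugin, so verify it against your installed version:

```
hive.security.authorization.enabled=true
hive.security.authorization.manager=org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizerFactory
```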