Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 732 | 06-04-2025 11:36 PM |
|  | 1305 | 03-23-2025 05:23 AM |
|  | 645 | 03-17-2025 10:18 AM |
|  | 2363 | 03-05-2025 01:34 PM |
|  | 1532 | 03-03-2025 01:09 PM |
04-10-2018 12:16 AM
@Nikhil Vemula Please check here.
04-10-2018 09:49 AM
Yes, it works. Thanks @Geoffrey Shelton Okot. But the property should be added to Custom core-site, not Advanced core-site: Ambari > HDFS > Configs > Custom core-site > Add Property. This config seems to cause many similar issues; why does HDP not add it automatically when Kerberos is enabled?
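In case it saves someone a few clicks, here is a minimal sketch of making the same change from the command line with Ambari's bundled configs.sh instead of the UI; the property name and value below are placeholders, so substitute the actual property from this thread.

```bash
# Hypothetical sketch: add a property under Custom core-site from the CLI
# (property name and value below are placeholders, not the real ones).
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
  set <AMBARI_HOST> <CLUSTER_NAME> core-site \
  "example.property.name" "example-value"
# Restart the stale services afterwards so the new core-site takes effect.
```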
04-13-2018 08:15 AM
I solved this problem by substituting the public IPs with private IPs in /etc/hosts.
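In case it helps others, a minimal sketch of what that /etc/hosts change looks like; all hostnames and IPs below are made-up placeholders for your own nodes.

```bash
# Hypothetical sketch: /etc/hosts entries switched from public to private IPs
# (hostnames and addresses are placeholders).
# Before:
#   54.12.34.56   node1.example.com node1
#   54.12.34.57   node2.example.com node2
# After:
10.0.0.11   node1.example.com node1
10.0.0.12   node2.example.com node2
```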
04-09-2018 10:00 AM
Thank you!
04-08-2018 03:59 PM
@Siddharth Mishra Good to know! It's always important to scrutinise the logs.
12-17-2018 06:55 AM
@Geoffrey Shelton Okot What you have mentioned works for a specific path. Do you have a procedure to change the block size across an existing cluster, including all the old files? Please post it here if you are aware of one!
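For reference, my understanding is that HDFS cannot change the block size of existing files in place; they have to be rewritten, e.g. with distcp, roughly like the sketch below (paths and the 256 MB figure are placeholders). Is there a cleaner cluster-wide way than this?

```bash
# Hypothetical sketch: rewrite existing files with a new block size via distcp
# (dfs.blocksize is in bytes; 268435456 = 256 MB; paths are placeholders).
hadoop distcp -Ddfs.blocksize=268435456 /data /data_256m
# Verify the copy before swapping the directories over:
hdfs fsck /data_256m -files -blocks | head
```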
04-05-2018 11:22 AM
@Juan Gonzalez Thank you, we are here to help each other. It's better you close this thread, as it's now very long and will stay irrelevant until you upgrade your memory. You can accept any of the responses that helped you. Be assured HCC is a nice place to get help; there are a lot of enthusiastic people here.
04-03-2018 07:46 AM
@Michael Bronson Yes, I think the steps are correct, but for better understanding I would add a step between 2 and 3 :-) : mounting the new FS and updating the fstab before copying the data across from the old mount point. A sketch of that extra step is below. Cheers 🙂
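Something along these lines, with the device, filesystem type, and mount point as placeholder assumptions:

```bash
# Hypothetical sketch of the extra step: mount the new filesystem and persist
# it in /etc/fstab before copying data (device and mount point are placeholders).
mkdir -p /grid/new
mount /dev/sdb1 /grid/new
echo '/dev/sdb1  /grid/new  ext4  defaults,noatime  0 2' >> /etc/fstab
```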
04-01-2018 05:15 PM
@Aishwarya Sudhakar Could you clarify which username you are running Spark under? Because of its distributed nature, you should copy dataset.csv to an HDFS directory that is accessible to the user running the Spark job.

According to your output above, the file is in the HDFS directory /demo/demo/dataset.csv, so your load should look like this: load "hdfs:///demo/demo/dataset.csv"

You said: "The demo is the directory that is inside hadoop. And dataset.csv is the file that contains data." Did you mean in HDFS? Does this command print anything? $ hdfs dfs -cat /demo/demo/dataset.csv

Please revert!
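A quick sketch of the checks I mean, using the paths from your output (the local filename is an assumption):

```bash
# Hypothetical sketch: confirm the file exists in HDFS and is readable by the
# user that runs the Spark job (paths taken from the thread).
hdfs dfs -ls /demo/demo/dataset.csv
hdfs dfs -cat /demo/demo/dataset.csv | head -5
# If it is missing, push the local copy into HDFS first:
hdfs dfs -mkdir -p /demo/demo
hdfs dfs -put dataset.csv /demo/demo/
```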
03-28-2018 08:11 PM
@Michael Bronson You will first need to identify the Ambari service and component names to use in the API; note this will also bring down the Metrics Collector.

List the services:

curl -u admin:admin -X GET http://<AMBARI_SERVER>:8080/api/v1/clusters/<CLUSTER_NAME>/services

Replace <Service_name> below with a service name from the previous output, e.g. AMBARI_METRICS, then stop it:

curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context": "Stop service"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' http://<AMBARI_SERVER_HOSTNAME>:8080/api/v1/clusters/<CLUSTER_NAME>/services/<Service_name>

The service will stop; check the Ambari UI. Hope that helps!!