Member since: 11-07-2016
Posts: 637
Kudos Received: 253
Solutions: 144
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2720 | 12-06-2018 12:25 PM |
| | 2860 | 11-27-2018 06:00 PM |
| | 2193 | 11-22-2018 03:42 PM |
| | 3565 | 11-20-2018 02:00 PM |
| | 6271 | 11-19-2018 03:24 PM |
10-26-2017
05:34 PM
@Facundo Bianco Can you please try running the curl command mentioned above? That should install the client.
10-26-2017
05:32 PM
@Nirmal J, From the above log it looks like there is an issue with the /etc/hosts configuration. Can you please try: telnet <namenode-host> 8020 The datanode is pointing to localhost/127.0.0.1:8020 instead of the name node host. Thanks, Aditya
10-26-2017
05:06 PM
@Facundo Bianco, It seems the spark2 client is not installed. Can you please check this directory: /usr/hdp/2.6.2.0-205/spark2/conf/ ? If the directory is not empty, you can copy files from it to /etc/spark2/2.6.2.0-205/0. If the directory is empty, try installing the client and check the directory again: yum install spark2_2_6_2_0_205-master spark2_2_6_2_0_205-python You can also install the client using the REST API: curl -k -u {username}:{password} -H "X-Requested-By:ambari" -i -X PUT -d '{"HostRoles": {"state": "INSTALLED"}}' http://{ambari-host}:{ambari-port}/api/v1/clusters/{clustername}/hosts/{hostname}/host_components/SPARK2_CLIENT Replace ambari-host, ambari-port, clustername, hostname, username and password with your values. Thanks, Aditya
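For reference, the same REST call can be built from Python's standard library instead of curl. This is only a sketch — the host, cluster, and credential values below are placeholders you must replace, just like the curl placeholders above:

```python
import base64
import json
import urllib.request

# Placeholder values -- substitute your Ambari host, cluster, host, and credentials.
AMBARI = "http://ambari.example.com:8080"
CLUSTER = "mycluster"
HOST = "worker1.example.com"
USER, PASSWORD = "admin", "admin"

url = (f"{AMBARI}/api/v1/clusters/{CLUSTER}/hosts/{HOST}"
       f"/host_components/SPARK2_CLIENT")
body = json.dumps({"HostRoles": {"state": "INSTALLED"}}).encode()

# Build the PUT request with the same headers the curl command sends.
req = urllib.request.Request(url, data=body, method="PUT")
req.add_header("X-Requested-By", "ambari")
auth = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
req.add_header("Authorization", f"Basic {auth}")

# urllib.request.urlopen(req) would actually send it; it is left out here
# because it needs a live Ambari server to respond.
```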
10-26-2017
04:50 PM
@uri ben-ari, 746.5 GB is the total memory of all node managers combined. It is calculated as (yarn.nodemanager.resource.memory-mb) * (number of node managers). Please check the value of yarn.nodemanager.resource.memory-mb in your YARN configs; see the screenshot. 575 GB is the memory used by all the applications. You can check the Resource Manager UI to see which apps are using memory. In the screenshot, you can see 3 apps running, with one app taking 9 GB and two apps taking 1 GB each, so the total is 11 GB. You can do the same check in your environment. Hope this helps. Thanks, Aditya
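The arithmetic above can be sketched directly. The per-node value and node count below are illustrative assumptions, not values read from your cluster:

```python
# Total YARN memory = yarn.nodemanager.resource.memory-mb * number of NodeManagers.
# Illustrative numbers only; read the real ones from your YARN configs.
memory_mb_per_node = 254_464      # hypothetical yarn.nodemanager.resource.memory-mb
node_managers = 3                 # hypothetical node count
total_gb = memory_mb_per_node * node_managers / 1024
print(f"Total cluster memory: {total_gb:.1f} GB")   # 745.5 GB for these values

# Memory used is just the sum over running applications, as in the screenshot:
apps_gb = [9, 1, 1]
print(f"Used by applications: {sum(apps_gb)} GB")   # 11 GB
```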
10-25-2017
05:39 PM
@Imre Ruskal
Can you please run the following from the history server node and check the response: telnet ip-172-31-17-42.eu-central-1.compute.internal 50070 Also, can you please share the /etc/hosts entries on the name node box?
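As a rough stand-in for telnet, a plain TCP connect from Python tells you whether a host/port is reachable. The loopback listener below only makes the demo self-contained; in practice you would point check_port at the history server target above:

```python
import socket

def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds (like telnet)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Self-contained demo: listen on an ephemeral loopback port, then probe it.
# In practice you would call, e.g.:
#   check_port("ip-172-31-17-42.eu-central-1.compute.internal", 50070)
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
reachable = check_port("127.0.0.1", port)
print(reachable)   # True: the port accepts connections
server.close()
```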
10-25-2017
04:18 PM
@Imre Ruskal, From the logs, it looks like the namenode is not running. Can you please log in to the name node box and run: netstat -tupln | grep 50070 Try starting the namenode and then run it again. Thanks, Aditya
10-25-2017
03:45 PM
@Viswa, I'm not aware of any way to create it as a single file directly. The only option I can think of is to create a single part file and rename it as required.
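The rename step can be sketched with plain Python against the local filesystem. This is only an analogy — on HDFS you would do the equivalent with hdfs dfs -mv — and the paths below are made up for the demo:

```python
import os
import tempfile

# Simulate a Spark output directory containing a single part file,
# then rename that part file to the name you actually want.
out_dir = tempfile.mkdtemp()
part = os.path.join(out_dir, "part-00000")
with open(part, "w") as f:
    f.write("line1\nline2\n")

target = os.path.join(out_dir, "filename.txt")
os.rename(part, target)          # the "rename it as required" step

print(os.path.exists(target))    # True
print(os.path.exists(part))      # False
```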
10-25-2017
03:25 PM
@Viswa, rdd.saveAsTextFile accepts a path as input and creates part files inside that folder. If you want a single output file inside the folder, you can use rdd.coalesce(1).saveAsTextFile('/path/filename.txt') Thanks, Aditya
10-25-2017
01:17 PM
@Thierry Vernhet, After running the first command, /targetdirectory will have been renamed to /x. So mv /mydirectory /targetdirectory does not produce /targetdirectory/mydirectory; instead it simply renames /mydirectory to /targetdirectory, since the destination directory no longer exists. So, if /targetdirectory has fewer files, this is a good option: instead of moving 30k files, you move far fewer. Thanks, Aditya
10-25-2017
12:29 PM
2 Kudos
@Thierry Vernhet, If there are fewer files in /targetdirectory than in /mydirectory, you can do the below: hdfs dfs -mv /targetdirectory /x
hdfs dfs -mv /mydirectory /targetdirectory
hdfs dfs -mv /x/* /targetdirectory
Thanks, Aditya
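The three hdfs dfs -mv steps above amount to a swap-by-rename, which is why they are cheap even for large directories. The same logic sketched locally in Python (purely illustrative; the file names are made up):

```python
import os
import tempfile

root = tempfile.mkdtemp()
target = os.path.join(root, "targetdirectory")
mydir = os.path.join(root, "mydirectory")
scratch = os.path.join(root, "x")
os.mkdir(target)
os.mkdir(mydir)
open(os.path.join(target, "old.txt"), "w").close()   # the few files
open(os.path.join(mydir, "new.txt"), "w").close()    # the many files

# Step 1: move the small directory out of the way.
os.rename(target, scratch)
# Step 2: rename the big directory into place (the destination no longer exists).
os.rename(mydir, target)
# Step 3: move the few files from the scratch directory back in.
for name in os.listdir(scratch):
    os.rename(os.path.join(scratch, name), os.path.join(target, name))

print(sorted(os.listdir(target)))   # ['new.txt', 'old.txt']
```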