Member since: 03-07-2019
Posts: 158
Kudos Received: 53
Solutions: 33
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 6350 | 03-08-2019 08:46 AM
 | 4337 | 10-17-2018 10:25 AM
 | 2771 | 10-16-2018 07:46 AM
 | 2118 | 10-16-2018 06:57 AM
 | 1760 | 10-12-2018 09:55 AM
10-03-2018
06:46 AM
Hi @Anurag Mishra
Spark keeps intermediate files in /tmp, which has likely run out of space. You can point spark.local.dir to a different directory with more space, either at submission time or permanently. Try the same job with this added to your spark-submit:
--conf "spark.local.dir=/directory/with/space"
If that works well, make the change permanent by adding this property to the custom spark-defaults in Ambari:
spark.local.dir=/directory/with/space
See also: https://spark.apache.org/docs/latest/configuration.html#application-properties
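For context, a full submission with that override might look like this (the class, jar, and directory are placeholders for your own job):
spark-submit \
  --master yarn \
  --class com.example.MyJob \
  --conf "spark.local.dir=/data/spark-tmp" \
  my-job.jar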
10-02-2018
02:50 PM
Hi @Eddie
Try it like this:
yarn jar My.jar Myclass.class -Dmapreduce.map.java.opts="-Xss5M" \
  -Dmapreduce.map.memory.mb=6000 \
  ...
I think the agentlib setting should be set through YARN_OPTS, so you could append it in Ambari -> YARN -> Configs -> Advanced yarn-env. Near the bottom of the yarn-env template you'll notice various YARN_OPTS being set. Add this:
YARN_OPTS="$YARN_OPTS -agentlib:jdwp=transport=dt_socket,server=y,address=8787"
I'm not sure whether the agentlib option can be passed at CLI submit time; it didn't work for me, and I had to add the line above to the yarn-env template. I hope this helps.
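Once a container starts with that jdwp agent, you can attach a debugger from another terminal; a minimal sketch with jdb, assuming the container's host is reachable on port 8787 (the hostname is a placeholder):
jdb -connect com.sun.jdi.SocketAttach:hostname=<nodemanager-host>,port=8787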
10-02-2018
01:58 PM
Hi @Ranganathan G T
You're looking at the full HDP installations there. If you just want the HDP Sandbox, you can download it here: https://hortonworks.com/downloads/#sandbox
10-02-2018
09:53 AM
Hi @Ho Chi
I realize this is an old post and you've likely gotten this working by now, but it still receives enough views to be relevant to others. I ran into the same problem following the same tutorial. To fix it, make sure that 1) there is no trailing whitespace on the line, and 2) the file is in Unix format (for example, open it in vi and run :set ff=unix) to get rid of any carriage return characters.
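If you prefer to check and fix this from the shell instead of vi, a quick sketch (the filename is just an example):
# show line endings; lines containing carriage returns end in ^M$
cat -A myscript.sh | tail
# strip carriage returns in place (same effect as :set ff=unix)
sed -i 's/\r$//' myscript.sh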
10-01-2018
02:00 PM
Hi @Shrikant BM
This is more or less expected once you've taken a few snapshots. The /apps/hbase/data/archive location stores the raw data for your snapshots, but it also holds HFiles left over after compaction. Those are normally cleaned up automatically, unless they are referenced by a snapshot, in which case they remain in the archive folder. So the more snapshots you keep, the more the archive folder will grow. You can delete older snapshots you no longer need. If you do need to keep the data, consider taking incremental backups rather than snapshots.
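For example, from the HBase shell you can review your snapshots and delete the ones you no longer need (the snapshot name below is made up):
hbase shell
list_snapshots
delete_snapshot 'mytable-snapshot-20180901'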
10-01-2018
06:47 AM
@Vivek Singh
Unfortunately that information isn't in ZooKeeper, so we can't get it from there unless you write it to ZooKeeper yourself (at which point the solution gets overly complicated). If you really must avoid REST, you could query the Ambari database to list the installed services, for example:
psql ambari -U ambari -W -p 5432 -c "select * from clusterservices"
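If you also want to see whether each service is supposed to be running, you could query the desired-state table as well; note that the table and column names below are from memory of the Ambari schema, so verify them against your Ambari version:
psql ambari -U ambari -W -p 5432 -c "select service_name, desired_state from servicedesiredstate"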
09-28-2018
12:53 PM
1 Kudo
@Vivek Singh As Phil suggested, REST is a good way to do this. Check all installed services:
curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://{your-ambari-server}:8080/api/v1/clusters/{clustername}/services/
Then check the status of a specific service, for example Spark 2:
curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://{your-ambari-server}:8080/api/v1/clusters/{clustername}/services/SPARK2
Look for 'state' in the output; it should show either 'INSTALLED' (stopped) or 'STARTED'. You can substitute components for services to check on individual components.
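For example, to drill into a single component of a service (the component name here is just an example; listing the service first shows which components it has):
curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://{your-ambari-server}:8080/api/v1/clusters/{clustername}/services/SPARK2/components/SPARK2_JOBHISTORYSERVER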
09-24-2018
06:56 AM
Hi @Anurag Mishra
Can you try the distcp with this extra parameter:
-Dmapreduce.job.hdfs-servers.token-renewal.exclude=remotenamenode1,remotenamenode2
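For reference, the full command would look something like this (namenode hostnames and paths are placeholders):
hadoop distcp -Dmapreduce.job.hdfs-servers.token-renewal.exclude=remotenamenode1,remotenamenode2 \
  hdfs://sourcenamenode:8020/source/path hdfs://remotenamenode1:8020/target/path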
09-14-2018
07:16 AM
@Michael Graml Setting multiple Livy2 servers on zeppelin.livy.url is not supported at the moment. As an alternative, you could point it at a load balancer for now.
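For example, in the livy2 interpreter settings you would point the property at the balancer instead of an individual Livy server (the hostname and port below are hypothetical; use whatever your Livy2 servers listen on):
zeppelin.livy.url = http://livy-lb.example.com:8999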
09-12-2018
10:00 AM
@Sandeep Nemuri Thank you sir!