Member since: 09-18-2015
Posts: 3274
Kudos Received: 1159
Solutions: 426
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2693 | 11-01-2016 05:43 PM |
| | 9142 | 11-01-2016 05:36 PM |
| | 5030 | 07-01-2016 03:20 PM |
| | 8440 | 05-25-2016 11:36 AM |
| | 4612 | 05-24-2016 05:27 PM |
02-13-2016 01:41 AM
@Kirill Elsukov Try this:

```
cd /usr/lib/python2.6/site-packages/
ln -s /usr/lib/ambari-agent/lib/ambari_commons ambari_commons
ln -s /usr/lib/ambari-agent/lib/resource_management resource_management
ln -s /usr/lib/ambari-agent/lib/ambari_jinja2 ambari_jinja2
ambari-agent restart
```
02-12-2016 10:40 PM
@Prakash Punj Log in as root on the box. This is what I suggest to customers, per node:

- `/` - whatever company policy dictates
- `/usr/hdp` - binaries, ~30 to 50 GB
- `/var/log` - logs, ~200 GB
- `/hadoop` (or whatever mount point) - size based on the customer's use case

In your case you can increase the volume size on the fly since it's a VM, or whatever works best for you. I am not a big fan of symlinks.
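As a quick sanity check on a layout like the one above, a small sketch (the mount points here are placeholders; substitute your actual ones) that reports free space on each proposed path:

```shell
# Illustrative only: report free space for each suggested mount point.
# /usr/hdp, /var/log and /hadoop are placeholders for your real layout.
for mp in / /usr/hdp /var/log /hadoop; do
  if [ -d "$mp" ]; then
    df -h "$mp" | tail -n 1
  else
    echo "$mp: directory not present on this host"
  fi
done
```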
02-12-2016 10:31 PM
@Prakash Punj Once done, refresh the browser and you can add AMS again.
02-12-2016 10:29 PM
@Prakash Punj Try:

```
curl --user admin:admin -i -H "X-Requested-By: ambari" -X DELETE http://`hostname -f`:8080/api/v1/clusters/CLUSTERNAME/services/AMBARI_METRICS
```

For example, I used this for the sandbox:

```
curl --user admin:admin -i -H "X-Requested-By: ambari" -X DELETE http://`hostname -f`:8080/api/v1/clusters/Sandbox/services/AMBARI_METRICS
```
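The endpoint has the same shape for any service, so it can help to assemble and print the URL before firing the DELETE. A small sketch (cluster and service names are placeholders; nothing is deleted here):

```shell
# Build the Ambari REST endpoint and print it; no request is sent.
CLUSTERNAME="Sandbox"        # placeholder: your cluster name
SERVICE="AMBARI_METRICS"     # placeholder: the service to remove
URL="http://$(hostname -f):8080/api/v1/clusters/${CLUSTERNAME}/services/${SERVICE}"
echo "$URL"
```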
02-12-2016 07:59 PM
@Rahul Tikekar To install Spark in Standalone mode, you simply place a compiled version of Spark on each node of the cluster.
02-12-2016 07:54 PM
@Rahul Tikekar Go to /usr/hdp/current and run:

```
ls spark*
```

You can see all the details related to Spark there.
02-12-2016 07:27 PM
@Michel Sumbul The above JIRA provides the detailed history on throttling. Also, if you are using Phoenix, look into the following parameter: `phoenix.query.maxTenantMemoryPercentage` (see https://phoenix.apache.org/tuning.html).
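If that parameter turns out to be relevant, it is typically set as a property in `hbase-site.xml`; a sketch of the fragment (the value here is illustrative, check the tuning page above for the shipped default before changing it):

```
<property>
  <name>phoenix.query.maxTenantMemoryPercentage</name>
  <value>100</value>
</property>
```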
02-12-2016 05:40 PM
@Satish S Please accept the answer to close the thread 😉
02-12-2016 05:39 PM
1 Kudo
@Juan Manuel Perez Log in as root on your server, then:

```
su - hdfs
hdfs dfs -mkdir -p /user/root
hdfs dfs -chown -R root:hdfs /user/root
```
02-12-2016 05:33 PM
1 Kudo
@Satish S Final try... I will try to reproduce if it does not work.

```
sqoop import --connect jdbc:mysql://localhost/test --username sat \
  --query 'select emp_no, salary from salaries where $CONDITIONS and salary > 8000' \
  --split-by emp_no --target-dir /user/cloudera/tmp/
```
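One thing worth double-checking in that command: `$CONDITIONS` must reach Sqoop literally, so keep the query in single quotes (or escape it as `\$CONDITIONS` inside double quotes); Sqoop substitutes its own per-mapper split predicate for the token at run time. A small sketch showing that single quotes preserve it:

```shell
# Single quotes stop the shell from expanding $CONDITIONS,
# so the token survives for Sqoop to replace later.
QUERY='select emp_no, salary from salaries where $CONDITIONS and salary > 8000'
echo "$QUERY"
# prints: select emp_no, salary from salaries where $CONDITIONS and salary > 8000
```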