Member since: 02-08-2016
Posts: 793
Kudos Received: 669
Solutions: 85
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3067 | 06-30-2017 05:30 PM
 | 3988 | 06-30-2017 02:57 PM
 | 3309 | 05-30-2017 07:00 AM
 | 3884 | 01-20-2017 10:18 AM
 | 8401 | 01-11-2017 02:11 PM
05-29-2016
03:23 AM
1 Kudo
@Ankit Tripathi The best way is to enable debug logging in /usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/log4j.xml. Replace every <priority value="info" /> with <priority value="debug" />, then restart Ranger. Run the test connection again while watching the log with "tail -f /var/log/ranger/admin/xa-portal.log".
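For reference, a minimal shell sketch of the steps above (the sed in-place edit and the ranger-admin restart command are assumptions; on some installs you would restart Ranger from Ambari instead):
# back up log4j.xml, then switch the priority from info to debug
sed -i.bak 's|<priority value="info" />|<priority value="debug" />|g' /usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/log4j.xml
# restart the Ranger Admin service (or do it from Ambari)
ranger-admin restart
# watch the portal log while re-running Test Connection in the Ranger UI
tail -f /var/log/ranger/admin/xa-portal.log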
05-28-2016
03:49 PM
@atul kumar If you run the job it will give you a job_id. If the job_id looks like job_<12312324233242> then your job is running in cluster (distributed) mode. If the job_id starts with job_local then it is running in local mode.
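For illustration, the two cases look roughly like this in the client console (the ids below are made up):
INFO mapreduce.Job: Running job: job_1464430932045_0007    <- cluster mode (submitted to YARN)
INFO mapreduce.Job: Running job: job_local1234567890_0001  <- local mode (LocalJobRunner)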
05-28-2016
01:13 PM
@Simon Wang
1. For the error with "https://hdp.localdomain.localdomain.localdomain.localdomain.localdomain:8440" - try upgrading openssl to the latest version: $ yum upgrade openssl (my working openssl version is openssl-1.0.1e-15.el6.x86_64).
2. Also modify the /etc/hosts file as below and retry -
127.0.0.1 localhost.localdomain localhost
::1 localhost.localdomain localhost
192.168.1.115 hdp
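A short sketch of how to verify the fix (the ambari-agent restart is included as an assumption, to force the host to retry the handshake on port 8440):
# confirm the upgraded openssl version
openssl version
rpm -q openssl
# restart the agent so registration is retried
ambari-agent restart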
05-28-2016
01:02 PM
@Ankit Tripathi Can you check the below properties for any change -
<property>
<name>hadoop.kms.authentication.type</name>
<value>kerberos</value>
</property>
<property>
<name>hadoop.kms.authentication.kerberos.keytab</name>
<value>${user.home}/kms.keytab</value>
</property>
<property>
<name>hadoop.kms.authentication.kerberos.principal</name>
<value>HTTP/localhost</value>
</property>
<property>
<name>hadoop.kms.authentication.kerberos.name.rules</name>
<value>DEFAULT</value>
</property>
<property>
<name>hadoop.kms.proxyuser.#USER#.users</name>
<value>*</value>
</property>
<property>
<name>hadoop.kms.proxyuser.#USER#.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.kms.proxyuser.#USER#.hosts</name>
<value>*</value>
</property>
Make sure you have a policy allowing the user to get keys in the Ranger KMS admin UI.
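To confirm the user can actually fetch keys, the hadoop key CLI is a quick check (the provider URI below, including the host kmshost and port 9292, is an assumption; use your Ranger KMS address):
# list key names visible to the current kerberos-authenticated user
hadoop key list -provider kms://http@kmshost:9292/kms
# an authorization error here usually points back to a missing Ranger KMS policy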
05-28-2016
12:24 PM
@Tajinderpal Singh A job stuck in the ACCEPTED state on YARN usually means there are not enough free resources. You can check this at http://resourcemanager:port/cluster/scheduler :
if Memory Used + Memory Reserved >= Memory Total, memory is not enough
if VCores Used + VCores Reserved >= VCores Total, VCores are not enough
It may also be limited by parameters such as maxAMShare. Follow the blog - http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/ - which describes in detail how to set the parameters for YARN containers. Check the below parameters (example values; an xml sketch follows this list) -
1) yarn-site.xml
yarn.resourcemanager.hostname = hostname_of_the_master
yarn.nodemanager.resource.memory-mb = 4000
yarn.nodemanager.resource.cpu-vcores = 2
yarn.scheduler.minimum-allocation-mb = 4000
2) mapred-site.xml
yarn.app.mapreduce.am.resource.mb = 4000
yarn.app.mapreduce.am.command-opts = -Xmx3768m
mapreduce.map.cpu.vcores = 2
mapreduce.reduce.cpu.vcores = 2
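As a sketch, the yarn-site.xml entries above would look like the following (the values are the example numbers from the post, not sizing recommendations):
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4000</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>2</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>4000</value>
</property>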
05-28-2016
04:00 AM
1 Kudo
Hi @Mahendra More
1. The best way is to download the sandbox and get started. Please download it from http://hortonworks.com/downloads/ - there are images for VirtualBox / VMware.
2. Start with the HDP tutorials - http://hortonworks.com/tutorials/ - the "Hadoop Administration" practicals are a good entry point.
3. If you are looking for an Ambari installation, please check the latest docs - http://docs.hortonworks.com/HDPDocuments/Ambari/Ambari-2.2.2.0/index.html and for HDP - http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/index.html
Feel free to reply if you have any further questions.
05-27-2016
01:20 PM
1 Kudo
Problem Statement: When you execute a GET call using the Ambari API to list services, it can fail with an error like the one below -
# curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://<AMBARI_SERVER_HOST>:8080/api/v1/clusters/<cluster_name>/services/
curl: (1) Protocol http not supported or disabled in libcurl
OR # curl -u admin:admin -H "X-Requested-By: ambari" -X GET “http://node1.example.com:8080/api/v1/clusters/HDP_TEST/services/“
curl: (1) Protocol “http not supported or disabled in libcurl
Solution: The curl error message is not easy to interpret, but the cause is simple: either there is an extra space before 'http' in the URL (in scripts, also check the CURLOPT_URL value), or the URL is wrapped in curly "smart" quotes instead of plain double quotes, as in the second example above. Remove the stray space and make sure the quotes are plain ASCII double quotes.
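For reference, a working form of the call, using plain ASCII double quotes and the same placeholder host and cluster name as in the question:
curl -u admin:admin -H "X-Requested-By: ambari" -X GET "http://node1.example.com:8080/api/v1/clusters/HDP_TEST/services/"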
05-26-2016
11:56 AM
@prathap t You cannot apply the patch yourself. The bug will be taken care of in an upcoming version of HDP. There might be a relevant bug filed in the Hortonworks JIRA.
05-26-2016
11:31 AM
@prathap t I see this is similar to the bug reported - https://issues.apache.org/jira/browse/HIVE-11205 Also check this, it might help - http://www.openkb.info/2015/03/how-to-enable-hive-default-authorization.html Make sure you have the below properties properly set and the grants in place (a grant example follows the properties) -
<property>
  <name>hive.security.authorization.enabled</name>
  <value>true</value>
  <description>enable or disable the hive client authorization</description>
</property>
<property>
  <name>hive.security.authorization.createtable.owner.grants</name>
  <value>ALL</value>
  <description>the privileges automatically granted to the owner whenever a table gets created.
  An example like "select,drop" will grant select and drop privilege to the owner of the table</description>
</property>
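With default authorization enabled as above, the grants themselves are issued in HiveQL; a hedged example (the database, table, and user names are made up):
-- grant read access on one table to one user
GRANT SELECT ON TABLE sales_db.orders TO USER analyst1;
-- verify the grant
SHOW GRANT USER analyst1 ON TABLE sales_db.orders;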
05-26-2016
11:26 AM
@Venkadesh Sivalingam Can you check the logs (both the YARN ResourceManager and NodeManager logs) for any error - usually search for org.apache.hadoop.yarn.logaggregation. There are a few known bugs with log aggregation which are fixed in HDP 2.2 and later - BUG-12006, https://issues.apache.org/jira/browse/YARN-2468. What version of HDP are you using? Also, can you make sure the properties below are in place and set correctly (a yarn-site.xml sketch follows this list) -
PROPERTIES RESPECTED WHEN LOG-AGGREGATION IS ENABLED
yarn.nodemanager.remote-app-log-dir: This is on the default file-system, usually HDFS, and indicates where the NMs should aggregate logs to. This should not be the local file-system, otherwise serving daemons like the history-server will not be able to serve the aggregated logs. Default is /tmp/logs.
yarn.nodemanager.remote-app-log-dir-suffix: The remote log dir will be created at {yarn.nodemanager.remote-app-log-dir}/${user}/{thisParam}. Default value is "logs".
yarn.log-aggregation.retain-seconds: How long to wait before deleting aggregated logs; -1 or a negative number disables the deletion of aggregated logs. One needs to be careful not to set this to too small a value, so as not to burden the distributed file-system.
yarn.log-aggregation.retain-check-interval-seconds: Determines how long to wait between aggregated-log retention checks. If it is set to 0 or a negative value, the value is computed as one-tenth of the aggregated-log retention time. As with the previous property, one needs to be careful not to set this too low. Defaults to -1.
yarn.log.server.url: Once an application is done, NMs redirect web UI users to this URL, where the aggregated logs are served. Today it points to the MapReduce-specific JobHistory server.
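A minimal yarn-site.xml sketch of the properties above (the directory and suffix values are the documented defaults; the 7-day retention is an assumption, adjust to your needs):
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/tmp/logs</value>
</property>
<property>
  <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
  <value>logs</value>
</property>
<property>
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>604800</value>
</property>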