Member since: 10-04-2016
Posts: 243
Kudos Received: 281
Solutions: 43
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1171 | 01-16-2018 03:38 PM |
| | 6139 | 11-13-2017 05:45 PM |
| | 3032 | 11-13-2017 12:30 AM |
| | 1518 | 10-27-2017 03:58 AM |
| | 28426 | 10-19-2017 03:17 AM |
09-13-2017
08:17 PM
3 Kudos
After Kerberos has been enabled, I was not able to open Hive View from Ambari. I would get the following error message:

Issue detected
Service 'userhome' check failed: Usernames not matched: name=root != expected=ambari-server-<clusterName>
Service 'userhome' check failed:
java.io.IOException: Usernames not matched: name=root != expected=ambari-server-<clusterName>
	at sun.reflect.GeneratedConstructorAccessor248.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

Root cause
Ambari Server is running as root, so it tries to authenticate with a proxy user of 'root', whereas ambari.service.keytab expects the principal ambari-server-<clusterName>@REALM. Hence the mismatch.

Solution

1. Edit the view settings
Go to the Edit View page in Ambari: Manage Ambari > Views > Hive > Hive View
Or simply: http://<ambariHost:port>/views/ADMIN_VIEW/<ambari.version>/INSTANCE/#/views/HIVE/versions/<view.version>/instances/AUTO_HIVE_INSTANCE/edit
Substitute the values of <ambariHost:port>, <ambari.version>, and <view.version> as needed, for example:
http://my.ambari.com:8080/views/ADMIN_VIEW/2.5.2.0/INSTANCE/#/views/HIVE/versions/1.5.0/instances/AUTO_HIVE_INSTANCE/edit
Under the Settings section, update the value of WebHDFS Authentication to:
auth=KERBEROS;proxyuser=ambari-server-<clusterName>
Save the changes.

2. Update configs
Navigate to the Hive and YARN Configs in the Ambari UI, make the changes below, and restart the respective services. <AMBARI_SERVER_PRINCIPAL_USER> should be replaced by ambari-server-<clusterName>.

A) Custom webhcat-site
webhcat.proxyuser.<AMBARI_SERVER_PRINCIPAL_USER>.groups=*
webhcat.proxyuser.<AMBARI_SERVER_PRINCIPAL_USER>.hosts=*

B) Custom yarn-site
yarn.timeline-service.http-authentication.<AMBARI_SERVER_PRINCIPAL_USER>.groups=*
yarn.timeline-service.http-authentication.<AMBARI_SERVER_PRINCIPAL_USER>.hosts=*
yarn.timeline-service.http-authentication.<AMBARI_SERVER_PRINCIPAL_USER>.users=*
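For reference, the same property changes can also be applied from the command line with Ambari's bundled configs.sh script. This is only a hedged sketch: the cluster name "c1" (so the principal user is ambari-server-c1), the localhost Ambari host, and the admin/admin credentials are all placeholders to adjust for your environment.

```bash
# Hedged sketch only: "c1", localhost, and admin/admin are placeholders.
CONFIGS=/var/lib/ambari-server/resources/scripts/configs.sh
PRINCIPAL_USER=ambari-server-c1

# Custom webhcat-site
$CONFIGS -u admin -p admin set localhost c1 webhcat-site "webhcat.proxyuser.${PRINCIPAL_USER}.groups" "*"
$CONFIGS -u admin -p admin set localhost c1 webhcat-site "webhcat.proxyuser.${PRINCIPAL_USER}.hosts" "*"

# Custom yarn-site
$CONFIGS -u admin -p admin set localhost c1 yarn-site "yarn.timeline-service.http-authentication.${PRINCIPAL_USER}.groups" "*"
$CONFIGS -u admin -p admin set localhost c1 yarn-site "yarn.timeline-service.http-authentication.${PRINCIPAL_USER}.hosts" "*"
$CONFIGS -u admin -p admin set localhost c1 yarn-site "yarn.timeline-service.http-authentication.${PRINCIPAL_USER}.users" "*"

# Restart the affected Hive (WebHCat) and YARN components afterwards.
```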
09-07-2017
03:37 AM
@Jonathan Hurley It is the yarn_app_timeline_server_webui alert. For some reason, alert_source had an incorrect value.
08-31-2017
02:05 AM
3 Kudos
Just verified: HDP 2.6.1 ships Hadoop/HDFS 2.7.3 (see the component versions page: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/bk_release-notes/content/comp_versions.html). The issue you have reported is a bug that was fixed in HDFS 2.8.0, per this Apache JIRA: https://issues.apache.org/jira/browse/HDFS-8805
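If you want to confirm what is actually installed on a node rather than relying on the docs, a quick hedged check (the trailing HDP build suffix in the output will vary by maintenance release):

```bash
# Prints the Hadoop build on this node; on HDP 2.6.1 the first line should start
# with "Hadoop 2.7.3", i.e. earlier than the 2.8.0 release that contains the HDFS-8805 fix.
hdfs version | head -1
```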
08-25-2017
05:35 PM
3 Kudos
Ambari is designed so that, during the upgrade, it backs up all paths listed under dfs.namenode.name.dir. Thus, there is currently no feature that lets you select a particular path to back up instead of all of them. This could be a new feature request!
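Until such a feature exists, you can always take a manual copy of a single metadata directory yourself before starting the upgrade. A minimal sketch with placeholder paths (check the configured value first):

```bash
# Show the NameNode metadata directories Ambari would back up during the upgrade
hdfs getconf -confKey dfs.namenode.name.dir

# Manually archive just one of them; /hadoop/hdfs/namenode and /tmp are example paths.
# Do this while the NameNode is stopped (or after a saved namespace) so the copy is consistent.
tar -czf /tmp/nn-meta-backup-$(date +%F).tar.gz -C /hadoop/hdfs/namenode .
```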
08-24-2017
10:12 PM
1 Kudo
Thank you..!! It worked.
08-28-2017
07:49 PM
Thank you @Nandish B Naidu..!! The solution worked.
08-15-2017
07:40 AM
Finally I found the problem: I hadn't opened port 5901 from the sandbox to the Docker container. Here is a tutorial that describes in detail how to do it: https://community.hortonworks.com/articles/65914/how-to-add-ports-to-the-hdp-25-virtualbox-sandbox.html
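The gist of that article, as a rough sketch only: Docker only exposes ports that were published when the container was created, so the sandbox container has to be recreated with the extra -p mapping. Container and image names and the port list below are placeholders; follow the linked steps for the actual HDP 2.5 sandbox scripts.

```bash
# Placeholder names/ports; note that changes made inside the old container are lost when it is removed.
docker stop sandbox && docker rm sandbox
docker run -d --name sandbox --hostname sandbox.hortonworks.com \
  -p 8888:8888 -p 8080:8080 -p 5901:5901 \
  sandbox-image
```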
08-16-2017
06:48 PM
You are so amazing; I really appreciate each of your comments and the time you have put in. Thanks so much. Just to let you know, the part I forgot to tell you is that before going to Pig I load the file into a Hive table within the DB POC. That is why I used:

july = LOAD 'POC.july' USING org.apache.hive.hcatalog.pig.HCatLoader();

The data coming from Hive already has a schema, so the relation in Pig matches that same schema. The problem is that even after setting a schema for the output, I'm still not able to store the outcome in a Hive table 😞. So, to reproduce my real scenario you should:

1. Load the CSV file into HDFS without headers (I delete them beforehand to avoid filters):

tail -n +2 OD_XXX.csv >> july.csv

2. Create the table and load the file in Hive:

create table july (
  start_date string,
  start_station int,
  end_date string,
  end_station int,
  duration int,
  member_s int)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

LOAD DATA INPATH '/user/andresangel/datasets/july.CSV'
OVERWRITE INTO TABLE july;

3. Follow my script posted above to the end, to try to store the final outcome in a Hive table 🙂. Thanks, buddy @Dinesh Chitlangia
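For step 3, here is a hedged sketch (not the poster's actual script) of how a Pig relation is typically written back to a Hive table through HCatalog with HCatStorer. The aggregation, the relation names, and the target table POC.station_counts are made up for illustration; note that HCatStorer requires the target table to already exist in Hive with a matching schema.

```bash
# Create the target Hive table first -- HCatStorer will not create it for you
hive -e "CREATE TABLE POC.station_counts (start_station INT, trips BIGINT) STORED AS ORC;"

# Pig script: read the existing Hive table, aggregate, and write back through HCatalog
cat > store_to_hive.pig <<'EOF'
july = LOAD 'POC.july' USING org.apache.hive.hcatalog.pig.HCatLoader();
by_station = GROUP july BY start_station;
counts = FOREACH by_station GENERATE group AS start_station, COUNT(july) AS trips;
STORE counts INTO 'POC.station_counts' USING org.apache.hive.hcatalog.pig.HCatStorer();
EOF

pig -useHCatalog store_to_hive.pig
```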
07-05-2017
06:53 PM
2 Kudos
It requires the spark-streaming-twitter jar to be added. Download the jar from here: https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-twitter_2.10/1.0.0 Then open the Spark shell using: spark-shell --jars /path/to/jarFile
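A minimal end-to-end sketch, assuming the jar from the Maven page above is downloaded into the current directory (the download URL simply follows the standard Maven Central layout for that artifact; match the Scala/Spark versions to your cluster):

```bash
# Fetch the jar from Maven Central and launch spark-shell with it on the classpath
wget https://repo1.maven.org/maven2/org/apache/spark/spark-streaming-twitter_2.10/1.0.0/spark-streaming-twitter_2.10-1.0.0.jar
spark-shell --jars ./spark-streaming-twitter_2.10-1.0.0.jar
```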