Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2528 | 04-27-2020 03:48 AM |
| | 4993 | 04-26-2020 06:18 PM |
| | 4089 | 04-26-2020 06:05 PM |
| | 3299 | 04-13-2020 08:53 PM |
| | 5039 | 03-31-2020 02:10 AM |
03-25-2017
01:42 PM
@Suhel Please remove the duplicate *.repo files from the "/etc/yum.repos.d/" directory. Yum gets confused by the overlapping files "ambari_bkp.repo", "ambari.repo", "hdp_bkp.repo", "hdp.repo", and "HDP.repo".
Ideally you should have only "ambari.repo", "HDP.repo", and "HDP-UTILS.repo" (apart from the OS-related repos). That's why you see the following messages in the "/var/lib/ambari-agent/data/errors-107.txt" file:
Repository Updates-ambari-2.1.0 is listed more than once in the configuration
Repository HDP-UTILS-1.1.0.20 is listed more than once in the configuration
Repository HDP-2.3.0.0 is listed more than once in the configuration
Please remove the duplicate .repo files from /etc/yum.repos.d and then perform a yum clean: # yum clean all
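A quick way to spot such duplicates is to compare the `[section]` headers (the repo IDs) across all *.repo files. This is a minimal sketch that rehearses the check in a scratch directory instead of /etc/yum.repos.d, so it is safe to run anywhere; the file names and repo ID mirror the ones from the error above.

```shell
# Sketch: detect repo IDs defined more than once across *.repo files.
# A scratch directory stands in for /etc/yum.repos.d so this is safe to run.
repodir=$(mktemp -d)
cat > "$repodir/ambari.repo" <<'EOF'
[Updates-ambari-2.1.0]
name=ambari Updates
baseurl=http://example.com/ambari
EOF
cat > "$repodir/ambari_bkp.repo" <<'EOF'
[Updates-ambari-2.1.0]
name=ambari Updates (stale backup copy)
baseurl=http://example.com/ambari
EOF
# Every [section] header is a repo ID; any ID printed here is duplicated.
grep -h '^\[' "$repodir"/*.repo | sort | uniq -d
```

Run against the real directory (`repodir=/etc/yum.repos.d`), any repo ID this prints is defined in more than one file and will trigger the "listed more than once" warning.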
03-25-2017
11:40 AM
@Francisco Pires The purpose of the Reducer is to aggregate the input values and return a single output value. By default a Sqoop job is a map-only job; it does not use the reducer, except in the cases mentioned in the following link: https://cwiki.apache.org/confluence/display/SQOOP/Sqoop+MR+Execution+Engine#SqoopMRExecutionEngine-ComponentsofSqoopusingMR Another old blog with an example explaining a similar scenario: https://dataandstats.wordpress.com/2014/12/04/apache-sqoop-only-mappers-with-no-reducers/
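To make the map-only nature concrete, here is a sketch of a typical import invocation (the JDBC URL, table, and paths are hypothetical). The only parallelism knob is `--num-mappers`; there is no reducer setting because no reduce phase runs. The block only prints the command rather than executing it, since it needs a live cluster.

```shell
# Hypothetical Sqoop import: --num-mappers controls the number of map
# tasks; a plain import runs 0 reduce tasks.
sqoop_cmd='sqoop import
  --connect jdbc:mysql://dbhost/sales
  --table orders
  --target-dir /user/etl/orders
  --num-mappers 4'
# On a real cluster you would run it directly; here we only print it.
echo "$sqoop_cmd"
```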
03-24-2017
03:58 PM
1 Kudo
@Uvaraj Seerangan As you are getting the error:
"trace": "{\"exception\":\"NotFoundException\",\"message\":\"java.lang.Exception: Timeline entity { id: tez_application_1490359925913_0002, type: TEZ_APPLICATION } is not found\",\"javaClassName\":\"org.apache.hadoop.yarn.webapp.NotFoundException\"}",
"message": "Failed to fetch results by the proxy from url: http://localhost:8188/ws/v1/timeline/TEZ_APPLICATION/tez_application_1490359925913_0002?_=1490365592944&user.name=admin",
please first check whether your "tez-site.xml" file has the following property set properly:
tez.am.view-acls=*
Also check whether disabling the ACLs in the YARN config works for you. Your error shows "user.name=admin", so either add the admin user to the YARN ACL or disable the ACLs for the moment and then try again:
yarn.admin.acl=yarn,dr.who,admin
yarn.acl.enable=false
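For reference, the settings above map onto the underlying config files like this (values taken from the text above; in practice you would change them through the Ambari UI rather than editing the XML by hand):

```xml
<!-- tez-site.xml -->
<property>
  <name>tez.am.view-acls</name>
  <value>*</value>
</property>

<!-- yarn-site.xml: either add the user to the admin ACL ... -->
<property>
  <name>yarn.admin.acl</name>
  <value>yarn,dr.who,admin</value>
</property>
<!-- ... or temporarily disable ACLs while testing -->
<property>
  <name>yarn.acl.enable</name>
  <value>false</value>
</property>
```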
03-24-2017
09:56 AM
@Vladislav Falfushinsky Ambari Blueprints are a declarative definition of a cluster; they do not contain any Ambari DB user/group related information. With a Blueprint, you specify a Stack, the Component layout, and the Configurations to materialize a Hadoop cluster instance (via a REST API) without having to use the Ambari Cluster Install Wizard: https://cwiki.apache.org/confluence/display/AMBARI/Blueprints#Blueprints-Introduction "ambari-server setup" also does not have any feature to create users/groups. But if you have LDAP / Active Directory configured, then you can sync users/groups using the ldap-sync option: https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.0.0/bk_ambari-security/content/synchronizing_ldap_users_and_groups.html
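As a sketch of what a blueprint actually contains, here is a minimal single-host example (the blueprint name, host group, and component list are hypothetical). Note that it carries only stack, component layout, and configuration; there is nowhere to put users or groups. The registration call is shown as a comment because it needs a live Ambari server.

```shell
# Minimal single-host blueprint sketch (all names are hypothetical).
bp=$(mktemp)
cat > "$bp" <<'EOF'
{
  "Blueprints": {
    "blueprint_name": "single-node",
    "stack_name": "HDP",
    "stack_version": "2.3"
  },
  "host_groups": [
    {
      "name": "master",
      "cardinality": "1",
      "components": [ { "name": "NAMENODE" }, { "name": "ZOOKEEPER_SERVER" } ]
    }
  ]
}
EOF
# Registering it is a single REST call (hypothetical host and credentials):
# curl -u admin:admin -H 'X-Requested-By: ambari' -X POST -d @"$bp" \
#      http://ambari-host:8080/api/v1/blueprints/single-node
python3 -m json.tool < "$bp" > /dev/null && echo "blueprint JSON is well-formed"
```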
03-24-2017
09:42 AM
@Vladislav Falfushinsky Do you want to use the Ambari APIs to manage user roles/groups/users? See: https://community.hortonworks.com/content/supportkb/49416/managing-ambari-users-and-groups-with-the-rest-api.html
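As a minimal sketch (hypothetical host, credentials, and names), creating a user or group through the REST API is a single POST each; the `X-Requested-By` header is mandatory for Ambari write calls. The curl lines are commented out since they need a live server; the block only validates the payloads.

```shell
# Sketch: create a user and a group via the Ambari REST API
# (hypothetical host/credentials; names "jdoe"/"analysts" are examples).
user_payload='{"Users/user_name":"jdoe","Users/password":"changeme","Users/admin":false}'
group_payload='{"Groups/group_name":"analysts"}'
# curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
#      -d "$user_payload"  http://ambari-host:8080/api/v1/users
# curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
#      -d "$group_payload" http://ambari-host:8080/api/v1/groups
echo "$user_payload" | python3 -m json.tool > /dev/null && echo "payloads ok"
```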
03-24-2017
05:35 AM
1 Kudo
While accessing the Ambari 2.4/2.5 Hive View (1.5.0/2.0), we see the following error in the log:
ERROR [ambari-client-thread-62] ContainerResponse:537 - Mapped exception to response: 500 (Internal Server Error) org.apache.ambari.view.hive2.utils.ServiceFormattedException
at org.apache.ambari.view.hive2.client.NonPersistentCursor.getNextRows(NonPersistentCursor.java:132)
at org.apache.ambari.view.hive2.client.NonPersistentCursor.fetchIfNeeded(NonPersistentCursor.java:119)
at org.apache.ambari.view.hive2.client.NonPersistentCursor.getDescriptions(NonPersistentCursor.java:84)
at org.apache.ambari.view.hive2.resources.jobs.ResultsPaginationController.request(ResultsPaginationController.java:145)
at org.apache.ambari.view.hive2.resources.jobs.JobService.getResults(JobService.java:361)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
We also notice the following kind of error, "Result fetch timed out", in the Hive View 1.5 UI while running Hive queries that fetch results from a large table:
"trace":"java.util.concurrent.TimeoutException: deadline passed
java.util.concurrent.TimeoutException: deadline passed
at akka.actor.dsl.Inbox$InboxActor$$anonfun$receive$1.applyOrElse(Inbox.scala:117)
at scala.PartialFunction$AndThen.applyOrElse(PartialFunction.scala:189)
at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
at akka.actor.dsl.Inbox$InboxActor.aroundReceive(Inbox.scala:62)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
at akka.dispatch.Mailbox.run(Mailbox.scala:220)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)","message":"Result fetch timed out","status":500}
This is basically a timeout error ("Result fetch timed out"): the Hive View is using the default timeout values from ambari.properties, which are not suitable for the kind of query being run, so it cannot fetch the result from Hive within the allotted time. Set the following properties in "/etc/ambari-server/conf/ambari.properties":
views.ambari.request.read.timeout.millis=300000
views.request.read.timeout.millis=300000
Here we set the value to 5 minutes; it can be increased or decreased based on your observations and the time taken by long-running queries. From Ambari 2.4 onwards, a new parameter is available to define Hive-instance-specific settings:
views.ambari.hive.<HIVE_VIEW_INSTANCE_NAME>.result.fetch.timeout=300000
Example: views.ambari.hive.AUTO_HIVE_INSTANCE.result.fetch.timeout=300000
These are the most important ambari.properties settings for Hive View 1.5/2.0: https://github.com/apache/ambari/blob/release-2.4.0/contrib/views/hive-next/src/main/java/org/apache/ambari/view/hive2/utils/HiveActorConfiguration.java#L32
**NOTE:** After making changes to "ambari.properties", ambari-server needs to be restarted.
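The two timeout properties above can be appended safely with a small idempotent snippet. This sketch writes to a scratch file so it can be run anywhere; on a real server PROPS would be /etc/ambari-server/conf/ambari.properties and you would restart Ambari afterwards.

```shell
# Sketch: add the timeout settings only if the key is not already present.
# PROPS is a scratch file here; on a real server it would be
# /etc/ambari-server/conf/ambari.properties.
PROPS=$(mktemp)
for kv in \
  'views.ambari.request.read.timeout.millis=300000' \
  'views.request.read.timeout.millis=300000'
do
  grep -q "^${kv%%=*}=" "$PROPS" || printf '%s\n' "$kv" >> "$PROPS"
done
cat "$PROPS"
# ambari-server restart   # required for the change to take effect
```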
03-24-2017
03:32 AM
@Elvis Zhang
The table output is so jumbled that I am not able to understand it clearly. But it is easy to take a table backup, for example:
sql> CREATE TABLE clusterconfigmapping_OLD AS SELECT * FROM clusterconfigmapping;
sql> SELECT * FROM clusterconfigmapping_OLD;
If you see that the table backup is successful, then try updating the table as per your previous command (make sure to commit the changes) and restart the Ambari server to see how it goes. Since you have the table backup, you can always roll back.
03-24-2017
01:59 AM
2 Kudos
@Elvis Zhang Your issue looks similar to: https://issues.apache.org/jira/browse/AMBARI-16379 Can you please run the following SQL commands on your Ambari database to see whether "selected=1" is set for krb5-conf:
sql> select * from clusterconfig WHERE type_name in ('kerberos-env', 'krb5-conf');
sql> select * from clusterconfigmapping WHERE type_name in ('kerberos-env', 'krb5-conf');
sql> select ccm.type_name, ccm.version_tag, ccm.selected, cc.version_tag from clusterconfigmapping ccm left join clusterconfig cc on ccm.version_tag = cc.version_tag where ccm.selected = 1 and cc.version_tag is NULL;
You might need to run the following steps against the Ambari DB to see if it fixes your issue:
1. Find the latest version tag for "krb5-conf" with a select query. Suppose the latest version tag for krb5-conf is "version1490320959000"; then set selected = 0 / 1 accordingly, so that only the latest version is selected.
2. Restart ambari-server.
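The "set selected = 0 / 1 accordingly" step can be rehearsed against a throwaway in-memory SQLite copy of the table before touching the real Ambari DB. The tag 'version1490320959000' is the example value from above; substitute the one your select query returns.

```shell
# Sketch: deselect every krb5-conf mapping, then select only the newest
# tag -- rehearsed on an in-memory SQLite stand-in for the Ambari table.
selected=$(python3 - <<'EOF'
import sqlite3
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE clusterconfigmapping"
           " (type_name TEXT, version_tag TEXT, selected INTEGER)")
db.execute("INSERT INTO clusterconfigmapping VALUES ('krb5-conf','version1',1)")
db.execute("INSERT INTO clusterconfigmapping"
           " VALUES ('krb5-conf','version1490320959000',0)")
# The actual repair: clear all selections, then pick the latest tag.
db.execute("UPDATE clusterconfigmapping SET selected = 0"
           " WHERE type_name = 'krb5-conf'")
db.execute("UPDATE clusterconfigmapping SET selected = 1"
           " WHERE type_name = 'krb5-conf'"
           " AND version_tag = 'version1490320959000'")
row = db.execute("SELECT version_tag FROM clusterconfigmapping"
                 " WHERE selected = 1").fetchone()
print(row[0])
EOF
)
echo "now selected: $selected"
```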
03-24-2017
01:09 AM
@Ye Jun
Good to know that the issue is resolved after changing the driver. It would be great if you could mark this HCC thread as "Accepted" so that it is useful for other users as well.
03-23-2017
04:37 PM
@Kent Brodie
Great to hear that your issue is resolved. It would be wonderful if you could mark the answer of this thread as "Accepted" so that it is useful for the community.