Member since: 05-29-2017
Posts: 408
Kudos Received: 121
Solutions: 9
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 802 | 09-01-2017 06:26 AM
 | 426 | 05-04-2017 07:09 AM
 | 314 | 09-12-2016 05:58 PM
 | 362 | 07-22-2016 05:22 AM
 | 305 | 07-21-2016 07:50 AM
02-17-2016
07:22 AM
@Neeraj Sabharwal: Thanks a lot for your testing. I see you have tested it for a Unix user (neeraj) who is part of a Unix group (hdpadmin), and that works fine for me too. But my requirement is different: we have users who don't connect to the server at all; they use tools directly (like Aqua Data Studio, a SQL client, or a Teradata client), and we validate them against our cluster by their LDAP (Active Directory) login via a JDBC string or through Beeline. When they submit a job, they set the property mapred.job.queue.name and run it. So my question is: can we configure the Capacity Scheduler view for LDAP or AD groups as well? I tried it for groups but get the error below; a user-specific mapping works as expected.

g:adhdpadm:batch

ERROR : Failed to execute tez graph. org.apache.tez.dag.api.TezException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1455533826426_0025 to YARN : Failed to submit application application_1455533826426_0025 submitted by user saurkuma reason: No groups found for user saurkuma

u:saurkuma:batch: working
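For reference, mappings of the u:user:queue and g:group:queue form shown above live in the yarn.scheduler.capacity.queue-mappings property of capacity-scheduler.xml. A minimal sketch, reusing the user, group, and queue names from this thread:

```xml
<property>
  <name>yarn.scheduler.capacity.queue-mappings</name>
  <!-- u:user:queue maps a single user; g:group:queue maps every member of a group.
       For g: mappings to work, YARN itself must be able to resolve the user's
       groups (e.g. via LDAP/AD group mapping in core-site.xml); otherwise
       submission fails with "No groups found for user". -->
  <value>u:saurkuma:batch,g:adhdpadm:batch</value>
</property>
```

The "No groups found for user saurkuma" error suggests Hadoop's group mapping cannot resolve AD groups for that user; running "hdfs groups saurkuma" on the cluster is a quick way to check which groups Hadoop actually sees.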
02-15-2016
04:40 PM
Hi @Neeraj Sabharwal: When I configured the AD group mapping, I didn't define any user mapping. And yes, user saurkuma is part of the adhdpadm group. My first question is: does it support LDAP (Active Directory) groups or not? I checked with local Unix groups and they work.
02-15-2016
02:27 PM
1 Kudo
@Neeraj Sabharwal: Yes, I am part of the AD group (adhdpadm). When I configure u:saurkuma:default it works fine, but when I use g:adhdpadm:default it fails with the above error.
02-15-2016
01:59 PM
1 Kudo
Hello @Neeraj Sabharwal: Thanks for the explanation above. I have configured the Capacity Scheduler view, and it works fine for local Unix groups and users. But when I configure it for an LDAP or AD group, it fails with the error below.

org.apache.tez.dag.api.TezException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1455533826426_0018 to YARN : Failed to submit application application_1455533826426_0018 submitted by user saurkuma reason: No groups found for user saurkuma
02-13-2016
02:53 PM
1 Kudo
@Zaher Mahdhi: You can also refer to the article below. http://www.hadoopadmin.co.in/planning-of-hadoop-cluster/
02-13-2016
02:45 PM
1 Kudo
@Tech Guy, can you restart the HDFS service once more and wait a while until the NameNode comes out of safe mode? Sometimes it stays in safe mode because a DataNode is down, HDFS space is full, or there are many under-replicated/corrupted files, so also run hadoop dfsadmin -report to get that information. Also, as Neeraj suggested, you can force safe mode off by running the command hdfs dfsadmin -safemode leave.
02-13-2016
02:34 PM
2 Kudos
If a pool's minimum share is not met for some period of time, the scheduler optionally supports preemption of jobs in other pools: the pool is allowed to kill tasks from other pools to make room for its own jobs to run. Preemption can be used to guarantee that "production" jobs are not starved while still allowing the Hadoop cluster to be used for experimental and research jobs. In addition, a pool can be allowed to preempt tasks if it is below half of its fair share for a configurable timeout (generally set larger than the minimum share preemption timeout). When choosing tasks to kill, the fair scheduler picks the most recently launched tasks from over-allocated jobs, to minimize wasted computation. Preemption does not cause the preempted jobs to fail, because Hadoop jobs tolerate losing tasks; it only makes them take longer to finish. For more details on configuring and understanding preemption, please go through the link below: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_yarn_resource_mgt/content/preemption.html
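The two timeouts described above are set per queue (pool) in the Fair Scheduler allocation file. A minimal, illustrative sketch — queue name and values are made up, not taken from any particular cluster:

```xml
<!-- fair-scheduler.xml (illustrative values only) -->
<allocations>
  <queue name="production">
    <!-- guaranteed minimum share for production jobs -->
    <minResources>10000 mb,10 vcores</minResources>
    <!-- seconds below minimum share before this queue may preempt others -->
    <minSharePreemptionTimeout>60</minSharePreemptionTimeout>
  </queue>
  <!-- seconds below half of fair share before preemption kicks in;
       generally set larger than the min-share timeout, as noted above -->
  <defaultFairSharePreemptionTimeout>600</defaultFairSharePreemptionTimeout>
</allocations>
```

Preemption itself must also be switched on with yarn.scheduler.fair.preemption=true in yarn-site.xml.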
02-12-2016
01:56 PM
1 Kudo
Can you check MySQL connectivity and paste the logs here? Can you also paste the output of the following command: SHOW GRANTS FOR 'hive'; And as Neeraj suggested, please grant all privileges to the hive user on the host, with a command like the one below. GRANT ALL PRIVILEGES ON *.* TO 'hive'@'<hive_host>' IDENTIFIED BY '<password>'
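Pulling the statements above together, a sketch of the check-then-grant sequence as you would run it in the MySQL shell on the metastore host (the <hive_host> and <password> placeholders are from the post above and must be substituted):

```sql
-- Inspect what the hive user can currently do
SHOW GRANTS FOR 'hive'@'<hive_host>';

-- Grant full privileges to the hive user from that host
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'<hive_host>' IDENTIFIED BY '<password>';
FLUSH PRIVILEGES;
```

Note that IDENTIFIED BY takes a plaintext password; the IDENTIFIED BY PASSWORD form expects an already-hashed value.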
02-12-2016
01:50 PM
1 Kudo
Hi, I have a requirement to assign our Capacity Scheduler queues based on AD groups. Our AD users use the cluster and run jobs under a defined queue, but I want to know whether there is any way to map an AD group to a queue so that each member of that group is routed only to that specific queue.
02-12-2016
01:25 PM
2 Kudos
Is there a good way to convert curl API JSON output to CSV? When I run curl -X GET "http://lxhdpmasttst001.lowes.com:8088/ws/v1/cluster/scheduler" it returns JSON, and I want to load it into Hive or convert it to CSV so that everyone can read it easily.
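One way to do this is a small script that picks the fields you care about out of the scheduler JSON and writes them as CSV rows. A minimal sketch, assuming the response has the usual scheduler/schedulerInfo/queues/queue nesting of the ResourceManager scheduler API; the sample below is a made-up, trimmed-down response, and the field names chosen (queueName, capacity, usedCapacity) are just examples:

```python
import csv
import io
import json

# Hypothetical trimmed-down sample shaped like the /ws/v1/cluster/scheduler
# response; a real response has the same nesting but many more fields.
sample = """
{"scheduler": {"schedulerInfo": {"queues": {"queue": [
  {"queueName": "default", "capacity": 60.0, "usedCapacity": 12.5},
  {"queueName": "batch",   "capacity": 40.0, "usedCapacity": 80.0}
]}}}}
"""

def scheduler_json_to_csv(text):
    """Flatten the per-queue metrics in a scheduler JSON response to CSV."""
    data = json.loads(text)
    queues = data["scheduler"]["schedulerInfo"]["queues"]["queue"]
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["queueName", "capacity", "usedCapacity"])
    for q in queues:
        writer.writerow([q["queueName"], q["capacity"], q["usedCapacity"]])
    return out.getvalue()

print(scheduler_json_to_csv(sample))
```

In practice you would feed it the body fetched by curl (or urllib) instead of the inline sample, and the resulting CSV file can then be loaded into a Hive table with a comma field delimiter.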
02-12-2016
01:05 PM
1 Kudo
@rajdip chaudhuri Can you please paste the contents of the /var/log/hadoop/hdfs/<latest_NN>.log and /var/log/ambari-server/ambari-server.log files? Also please check whether the HDFS and Ambari services are running.
02-11-2016
09:36 PM
1 Kudo
When you upgrade your stack from HDP 2.2 to 2.3, you may have issues using Hue or Hive due to a bug that occurs during the upgrade: the Hive config can end up with a wrong value in the templeton.libjars property, namely /usr/hdp/${hdp.version}/zookeeper,/usr/hdp/${hdp.version}/hive/lib/hive-common.jar/zookeeper.jar
The correct value is /usr/hdp/${hdp.version}/zookeeper/zookeeper.jar,/usr/hdp/${hdp.version}/hive/lib/hive-common.jar
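For clarity, here is the corrected property as it would appear in the WebHCat (Templeton) configuration; the value is the one given in the post above:

```xml
<property>
  <name>templeton.libjars</name>
  <!-- corrected value: zookeeper.jar path fixed after the HDP 2.2 -> 2.3 upgrade -->
  <value>/usr/hdp/${hdp.version}/zookeeper/zookeeper.jar,/usr/hdp/${hdp.version}/hive/lib/hive-common.jar</value>
</property>
```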
Tags: Hadoop Core
02-11-2016
09:20 AM
1 Kudo
@Artem Ervits Sorry if I didn't explain my question well. I don't have any issue creating widgets, but when I export to CSV for 1 hour the data is at a 1-minute interval, while for one day, one week, or one year the interval increases to 1 hour. My requirement is a 30-second interval even for one month or one year. I tried to follow the URL below but could not get a clear idea. https://cwiki.apache.org/confluence/display/AMBARI/Configuration
02-10-2016
01:44 PM
@Rahul Pathak I have Ambari version 2.2.0.0 and HDP stack 2.3.4.0. I tried other metrics for the ResourceManager but get a weird error. Problem!
Unable to compile query predicate: Invalid Query Token: token='(', previous token type=VALUE_OPERAND.
02-10-2016
01:04 PM
Hi @Rahul Pathak, I am selecting from the drop-down only, as below, but it gives me the same error.
02-10-2016
12:51 PM
@Rahul Pathak @Neeraj Sabharwal: I am trying to configure Grafana with Ambari and create a graph, but it throws an error. Can you please point me to a solution? Problem!
The requested resource doesn't exist: ServiceComponentHost not found, clusterName=HDPTST, serviceName=AMBARI_METRICS, serviceComponentName=METRICS_COLLECTOR, hostName=test.com
02-10-2016
11:34 AM
3 Kudos
Hello everyone, in Ambari widgets we get data at a 1-minute interval if we extract for 1 hour, but if we go for one day or more the interval keeps increasing. My requirement is a widget for Capacity queue utilization at a 30-second interval. Can you please help me get this done? Thanks in advance.
02-10-2016
09:34 AM
Hello Hadi, it seems your ambari-agent is not running, so please restart the Ambari agent on all the nodes: $ ambari-agent start or $ ambari-agent restart
02-10-2016
09:29 AM
1 Kudo
Hi Vinod, you can use the following steps to apply a patch on your server. 1. Download the patch to your local machine or the server. 2. Use the command below to apply it under the Ambari home directory: $ patch -p0 -i /path/to/AMBARI-14466.patch
02-10-2016
09:11 AM
1 Kudo
Hello Sasikumar, please use username: azure and password: azure.
02-09-2016
12:44 PM
1 Kudo
@Rupinder Singh Sometimes pressing Ctrl+C does not kill jobs, so it is always better to check the job status via command and, if it is still running, kill it by command: yarn application -status jobid, then yarn application -kill jobid (if running). Also, do you have Hue installed in your environment? If yes, can you try with Hue as well? Also post your YARN job logs.
02-08-2016
05:28 PM
@Artem Ervits @Neeraj Sabharwal @Rahul Pathak: Thanks for all your help; I am now able to create widgets for queue utilization, but I still have one question. When I export data for one day I get metrics at a 1-minute interval, but for one month it goes to a 1-hour interval, and for one year even to a 1-day interval. So my question is: can we export data at a 1-minute interval for one month or one year?
02-08-2016
01:26 PM
Hi @kang hua, can you restart the Hive Metastore and HiveServer2 and then do the following? Are your View server and HiveServer on separate machines? If yes, please check whether you can connect to HS2 from the View server. Also run a service check through Ambari by clicking the Hive service and then Actions. Also go to a server CLI where you can run a Hive query manually, just to cross-check. If the error still persists, paste the logs here.
02-08-2016
01:18 PM
So what could be the reason for the failure, @peeyush? Any idea?
02-08-2016
10:18 AM
Thanks @peeyush, I cross-checked and found the following: no value was passed for this variable, nor for other variables. But my question is how it succeeds on subsequent runs. Does Falcon create a separate coordinator for each job, every time?
02-08-2016
09:31 AM
@peeyush I forgot to mention that after a failure, if I resubmit the process and feed entities, the job runs successfully. But it fails many times with no argument values in the configuration.
02-08-2016
07:17 AM
@peeyush I am using Falcon version 0.6.0.2.2.0.0-2041. There is some confidential info in the entity, so unfortunately I cannot share it. But I can tell you that if I delete the feed and process entities and then resubmit them, it works. And note that I am not using this parameter at all in my entity, nor in job.properties in Oozie, so I want to know where Falcon takes this value from.
02-08-2016
06:30 AM
2 Kudos
We are running Hive jobs with Oozie and Falcon. It was running fine, but all of a sudden it started failing with the error below. Note that we are not declaring or assigning this variable anywhere in our code.

EL_ERROR: variable [feedInstancePaths] cannot be resolved
sitecatalyst-kpis-generation-daily-process]
JOB[0011048-151208005457707-oozie-oozi-W]
ACTION[0011048-151208005457707-oozie-oozi-W@succeeded-post-processing]
ELException in ActionStartXCommand
javax.servlet.jsp.el.ELException: variable [feedInstancePaths] cannot be resolved
at org.apache.oozie.util.ELEvaluator$Context.resolveVariable(ELEvaluator.java:106)
02-08-2016
06:24 AM
1 Kudo
Divakar: Thanks for your consideration, but as mentioned above, the problem has been resolved by running:
build/env/bin/hue syncdb --noinput
02-08-2016
05:42 AM
1 Kudo
Yes, you are right @Artem Ervits, we should use Views, but for that we should have a dedicated server to manage the Views; otherwise it may overload our Ambari servers, and if we make changes and restart Ambari, users may also be impacted.