Member since: 09-30-2015
Posts: 83
Kudos Received: 57
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
| 7819 | 02-26-2016 11:19 PM
05-12-2016
04:46 PM
2 Kudos
Do we have a way to use Cloudbreak to install an HDP cluster on bare-metal servers? I know Cloudbreak works with AWS/GCP/Azure and OpenStack, but I would like to know if Cloudbreak can spin up clusters either via Docker or directly on on-premise physical nodes without OpenStack managing the hardware. Please advise.
Labels:
- Docker
- Hortonworks Cloudbreak
04-20-2016
06:17 PM
@Matt Foley Thanks for the additional information. This is very helpful.
04-18-2016
07:57 PM
@Divakar Annapureddy Does this mean that we can't restore the cluster from the metadata/data backups we take before the cluster re-install?
04-18-2016
07:15 PM
What is the best approach to restoring an HDP cluster if a customer would like to migrate an existing HDP cluster from SUSE to RHEL? Is it the same as re-installing the OS/HDP cluster and restoring it from the backed-up data/configs? Please advise.
04-15-2016
09:41 PM
@jramakrishnan This looks more like an authentication solution than one that secures the data channels to eliminate the risk of the data being transferred in clear text. Am I missing anything here?
04-14-2016
09:34 PM
Is there any way to secure the data transfer between the source and the DataNodes while using Sqoop? I know we can use Kerberos for authentication, but I am not sure if we have any way to secure the data transfer itself. Please advise.
Labels:
- Apache Hadoop
- Apache Sqoop
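For what it's worth, a sketch of the usual two legs to cover (hedged: the hostnames, credentials, and paths below are placeholders; the property names are the standard Hadoop/Sqoop ones):

```shell
# Hypothetical sketch -- dbhost, etl, and the HDFS paths are placeholders.
# 1) Encrypt the JDBC leg (source database -> map tasks) via the driver's SSL support:
sqoop import \
  --connect "jdbc:mysql://dbhost:3306/sales?useSSL=true&requireSSL=true" \
  --username etl --password-file hdfs:///user/etl/.pw \
  --table orders --target-dir /user/etl/orders

# 2) Encrypt the Hadoop-side legs with cluster-wide wire encryption:
#    core-site.xml:  hadoop.rpc.protection=privacy   (encrypts RPC traffic)
#    hdfs-site.xml:  dfs.encrypt.data.transfer=true  (encrypts DataNode block transfer)
```

Both Hadoop settings apply cluster-wide, not per-job, so they need an Ambari config change and service restart rather than a Sqoop option.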
04-05-2016
04:58 PM
@Kuldeep Kulkarni Thanks for the update. This issue was resolved by Hortonworks Support Engineering. I am trying to catch up with what they did to fix it.
03-25-2016
05:32 PM
1 Kudo
We enabled YARN ResourceManager HA on our cluster (HDP 2.3.2 and Ambari 2.1.2.1) and it was working fine until we re-installed the Ranger KMS server. When ResourceManager HA was working, I saw one of them as the active ResourceManager and the other as standby, but now they are both showing as ResourceManager. Also, when I run the service check on YARN, it fails with the following error message:

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py", line 142, in <module>
ServiceCheck().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 216, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py", line 138, in service_check
raise Exception("Could not get json response from YARN API")
Exception: Could not get json response from YARN API
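As a first check for the HA symptom described above, each ResourceManager can be asked for its HA state directly; a small sketch, assuming the default rm1/rm2 ids from yarn.resourcemanager.ha.rm-ids:

```shell
# Ask each ResourceManager for its HA state; the ids must match yarn.resourcemanager.ha.rm-ids
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
# Exactly one should answer "active" and the other "standby".
# If both come back as standby, a manual failover can promote one:
#   yarn rmadmin -transitionToActive --forcemanual rm1
```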
03-01-2016
03:06 AM
That's interesting. Can someone confirm that we can't use cp or mv on encrypted-zone files?
02-29-2016
11:35 PM
3 Kudos
I am new to TDE, and one of our customers would like to know the following: What happens when an encrypted file is moved from an encryption zone to another location on HDFS? Can we still decrypt and re-encrypt that file using the same key, or can we no longer decrypt that file once it is moved out of its encryption-zone location?
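For reference, the behavior documented for HDFS transparent encryption, sketched with hypothetical paths (/secure and /secure2 are encryption zones, /staging is not):

```shell
# cp decrypts transparently on read; the copy outside the zone is stored in clear text
hdfs dfs -cp /secure/report.csv /staging/report.csv
# mv (rename) across an encryption-zone boundary is rejected by the NameNode
hdfs dfs -mv /secure/report.csv /staging/report.csv
# Copying between two encryption zones decrypts and re-encrypts with the destination zone's key
hdfs dfs -cp /secure/report.csv /secure2/report.csv
```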
02-26-2016
11:19 PM
1 Kudo
Jobs are running fine after I added the user to the hadoop group on all the nodes, but I am not sure adding the user account to the hadoop group is a good idea.
02-26-2016
07:52 PM
1 Kudo
Yes, the user-not-found issue is gone after I created the user on all the nodes. Do you know where I can look to find which classpath/jars have the permissions issue?
02-26-2016
07:06 PM
@Vikas Gadade I created the user on all the nodes, but the job is still failing with the following output:

xxxxx:/# yarn jar /usr/hdp/2.3.2.0-2950/hadoop-mapreduce/hadoop-mapreduce-examples.jar teragen 10000 /user/xxxxx/teraout8
16/02/26 10:52:18 INFO impl.TimelineClientImpl: Timeline service address: http://timelineuri:8188/ws/v1/timeline/
16/02/26 10:52:18 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 37 for rbalam on ha-hdfs:testnnhasvc
16/02/26 10:52:19 INFO security.TokenCache: Got dt for hdfs://testnnhasvc; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:testnnhasvc, Ident: (HDFS_DELEGATION_TOKEN token 37 for rbalam)
16/02/26 10:52:19 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
16/02/26 10:52:20 INFO terasort.TeraSort: Generating 10000 using 2
16/02/26 10:52:21 INFO mapreduce.JobSubmitter: number of splits:2
16/02/26 10:52:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1456512672399_0001
16/02/26 10:52:22 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:testnnhasvc, Ident: (HDFS_DELEGATION_TOKEN token 37 for rbalam)
16/02/26 10:52:24 INFO impl.YarnClientImpl: Submitted application application_1456512672399_0001
16/02/26 10:52:24 INFO mapreduce.Job: The url to track the job: http://timelineuri:8188/ws/v1/timeline/
16/02/26 10:52:24 INFO mapreduce.Job: Running job: job_1456512672399_0001
16/02/26 10:52:29 INFO mapreduce.Job: Job job_1456512672399_0001 running in uber mode : false
16/02/26 10:52:29 INFO mapreduce.Job: map 0% reduce 0%
16/02/26 10:52:29 INFO mapreduce.Job: Job job_1456512672399_0001 failed with state FAILED due to: Application application_1456512672399_0001 failed 2 times due to AM Container for appattempt_1456512672399_0001_000002 exited with exitCode: -1000
For more detailed output, check application tracking page: http://timlineserveruri:8088/cluster/app/application_1456512672399_0001 Then, click on links to logs of each attempt.
Diagnostics: Application application_1456512672399_0001 initialization failed (exitCode=255) with output: main : command provided 0 main : run as user is xxxxx main : requested yarn user is xxxxx Failing this attempt. Failing the application.
16/02/26 10:52:29 INFO mapreduce.Job: Counters: 0
02-26-2016
11:58 AM
1 Kudo
Could you please confirm this again? If I need to have users on all the nodes in the cluster to run jobs successfully, I could end up with quite a few users on all the nodes, which may become a maintenance headache down the line.
02-26-2016
11:46 AM
1 Kudo
@Neeraj Sabharwal I ran the job again and tried to get the YARN logs. Here is what I see:

xxxxx:~# yarn logs -applicationId application_1456457210711_0002
16/02/26 03:44:26 INFO impl.TimelineClientImpl: Timeline service address: http://yarntimelineserveraddress:8188/ws/v1/timeline/
/app-logs/xxxxx/logs/application_1456457210711_0002 does not have any log files.

Here is what I see on the ResourceManager UI:

Application application_1456457210711_0002 failed 2 times due to AM Container for appattempt_1456457210711_0002_000002 exited with exitCode: -1000
For more detailed output, check application tracking page: http://resourcemanageruri:8088/cluster/app/application_1456457210711_0002 Then, click on links to logs of each attempt.
Diagnostics: Application application_1456457210711_0002 initialization failed (exitCode=255) with output: main : command provided 0 main : run as user is xxxxx main : requested yarn user is xxxxx User xxxxx not found
Failing this attempt. Failing the application.
02-26-2016
02:51 AM
2 Kudos
I enabled Kerberos on an HDP 2.3.2 cluster using Ambari 2.1.2.1 and then tried to run a MapReduce job on the edge node as a local user, but the job failed.

Error message:

Diagnostics: Application application_1456454501315_0001 initialization failed (exitCode=255) with output: main : command provided 0 main : run as user is xxxxx main : requested yarn user is xxxxx User xxxxx not found
Failing this attempt. Failing the application.
16/02/25 18:42:28 INFO mapreduce.Job: Counters: 0
Job Finished in 7.915 seconds

My understanding is that we don't need the edge-node local user anywhere else, but I am not sure why my MapReduce job is failing due to the user not being present on the other nodes. Please help.

Example MapReduce job:

XXXXX:~# yarn jar /usr/hdp/2.3.2.0-2950/hadoop-mapreduce/hadoop-mapreduce-examples-2.7.1.2.3.2.0-2950.jar pi 16 100000
Labels:
- Apache Hadoop
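Context for the user-not-found error: on a Kerberized cluster the LinuxContainerExecutor resolves the submitting user on every NodeManager host, so the account has to exist cluster-wide (either locally on each node or via a directory service such as SSSD/LDAP). A hypothetical provisioning loop (nodes.txt, etluser, and the UID are all placeholders):

```shell
# Create the account on every NodeManager host where it is missing;
# pinning the UID keeps local file ownership consistent across nodes.
while read -r host; do
  ssh "$host" 'id etluser >/dev/null 2>&1 || sudo useradd -u 2001 etluser'
done < nodes.txt
```

At larger scale, central account management (SSSD against AD/LDAP) avoids exactly the per-node maintenance headache raised in the thread above.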
02-16-2016
04:10 AM
3 Kudos
The ambari-agent running on the ambari-server node is not able to create its pid file, but when we check the process, the agent is running in the background. However, when we check the status using `ambari-agent status`, it reports that the agent is NOT running. The reason is that no pid file was created when we tried to start the ambari-agent. Could you please help us identify why it is not creating the PID file? We have Ambari 2.2.0 running on RHEL 6.6, and ambari-server and ambari-agent are running as root.
Labels:
- Apache Ambari
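One hedged place to look for the missing pid file: the agent writes it into the directory named by piddir in /etc/ambari-agent/conf/ambari-agent.ini (typically /var/run/ambari-agent); if that directory is missing or unwritable, the status check reports the agent as stopped even though the process is up:

```shell
# Where does the agent expect to write its pid file?
grep -i piddir /etc/ambari-agent/conf/ambari-agent.ini
# Does that directory exist, and can the agent's user (root here) write to it?
ls -ld /var/run/ambari-agent
# Compare: the process is up ...
ps -ef | grep '[a]mbari-agent'
# ... but the pid file the status check looks for may be absent
ls -l /var/run/ambari-agent/ambari-agent.pid
```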
02-11-2016
04:40 PM
1 Kudo
I followed the same steps on another cluster where we don't have Hue HTTPS and Kerberos, and it is working as expected there. So I think there is a problem with either the HTTPS and/or the Kerberos settings.
02-11-2016
04:12 PM
1 Kudo
Hi @Neeraj Sabharwal I followed the same steps and restarted the HttpFS and Hue services, but when I try to access the Hue file browser, it throws exceptions. The only difference in this environment is that Hue is running on HTTPS and the cluster is Kerberized, but I am not sure if that makes any difference. Can you please let me know how to troubleshoot this issue?

WebHdfsException at /filebrowser/
StandbyException: Operation category READ is not supported in state standby (error 403)
Request Method: GET
Request URL: https://falbdcdd0001v:8000/filebrowser/
Django Version:
Exception Type: WebHdfsException
Exception Value: StandbyException: Operation category READ is not supported in state standby (error 403)
Exception Location: /usr/lib/hue/desktop/libs/hadoop/src/hadoop/fs/webhdfs.py in _stats, line 205
Python Executable: /usr/bin/python2.6
Python Version:
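The 403 StandbyException usually means Hue's WebHDFS URL points at whichever NameNode is currently in standby. A sketch of the usual checks (nn1/nn2 assumed from dfs.ha.namenodes.&lt;nameservice&gt;; the HttpFS host is a placeholder):

```shell
# Which NameNode is active right now?
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# Hue does not follow NameNode failover over plain WebHDFS; pointing it at
# HttpFS instead avoids chasing the active NameNode after a failover.
# hue.ini -> [hadoop] -> [[hdfs_clusters]] -> [[[default]]]:
#   webhdfs_url=http://httpfs-host:14000/webhdfs/v1
```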
02-10-2016
10:17 PM
1 Kudo
I have a follow-up question on this. Let's say I removed all the users from Ranger that were synced from a local Unix server and then re-configured Ranger to sync users from an AD domain/group. In this case, do I need to create the "hive" user in that particular AD group before I can create a policy that lets Hive queries run as the hive user instead of the end users on the cluster? What about other service accounts like mapred, yarn, etc.? Do I need to create all those accounts in AD? Please advise.
02-09-2016
06:33 PM
2 Kudos
I am using Firefox 44.0 and here is the trace from it.
TRACE: The url is: /api/v1/clusters/hwtest?fields=Clusters/health_report,Clusters/total_hosts,alerts_summary_hosts&minimal_response=true app.js:154162:7
Status code 200: Success. app.js:54173:3
App.componentsStateMapper execution time: timer started app.js:55315
App.componentsStateMapper execution time: 29.04ms app.js:55360
TRACE: The url is: /api/v1/clusters/hwtest/host_components?fields=HostRoles/component_name,HostRoles/host_name&minimal_response=true app.js:154162:7
Status code 200: Success. app.js:54173:3
TRACE: The url is: /api/v1/clusters/hwtest/configurations?type=cluster-env app.js:154162:7
Status code 200: Success. app.js:54173:3
App.componentConfigMapper execution time: timer started app.js:55076
App.componentConfigMapper execution time: 5.95ms app.js:55125
Config validation failed: Object { readyState: 4, setRequestHeader: .ajax/v.setRequestHeader(), getAllResponseHeaders: .ajax/v.getAllResponseHeaders(), getResponseHeader: .ajax/v.getResponseHeader(), overrideMimeType: .ajax/v.overrideMimeType(), abort: .ajax/v.abort(), done: f.Callbacks/p.add(), fail: f.Callbacks/p.add(), progress: f.Callbacks/p.add(), state: .Deferred/h.state(), 14 more… } error Internal Server Error Object { type: "POST", timeout: 180000, dataType: "json", statusCode: Object, headers: Object, url: "/api/v1/stacks/HDP/versions/2.3/val…", data: "{"hosts":["falbdcdq0001v.farmersinsurance.com","falbdcdq0002v.farmersinsurance.com","falbdcdq0003v.farmersinsurance.com","falbdcdq0004v.farmersinsurance.com","falbdcdq0005v.farmersinsurance.com","falbdcdq0006v.farmersinsurance.com","falbdcdq0007v.farmersinsurance.com"],"services":["AMBARI_METRICS","ATLAS","FALCON","FLUME","HBASE","HDFS","HIVE","KAFKA","KNOX","MAPREDUCE2","OOZIE","PIG","RANGER","RANGER_KMS","SPARK","SQOOP","STORM","TEZ","YARN","ZOOKEEPER"],"validate":"configurations","recommendations":{"blueprint":{"host_groups":[{"name":"host-group-1","components":[{"name":"FALCON_CLIENT"},{"name":"HBASE_CLIENT"},{"name":"HDFS_CLIENT"},{"name":"HCAT"},{"name":"HIVE_CLIENT"},{"name":"MAPREDUCE2_CLIENT"},{"name":"OOZIE_CLIENT"},{"name":"PIG"},{"name":"SPARK_CLIENT"},{"name":"SQOOP"},{"name":"TEZ_CLIENT"},{"name":"YARN_CLIENT"},{"name":"ZOOKEEPER_CLIENT"},{"name":"METRICS_MONITOR"},{"name":"FALCON_CLIENT"},{"name":"HBASE_CLIENT"},{"name":"HCAT"},{"name":"HDFS_CLIENT"},{"name":"HIVE_CLIENT"[…], context: Object, beforeSend: ajax<.send/opt.beforeSend(), success: ajax<.send/opt.success(), 2 more… } app.js:64175:5
Error code 500: Internal Error on server side. app.js:54195:3
Config validation failed. Going ahead with saving of configs app.js:64188:7
TRACE: The url is: /api/v1/clusters?fields=Clusters/provisioning_state app.js:154162:7
Status code 200: Success. app.js:54173:3
TRACE: The url is: /api/v1/clusters/hwtest/requests?to=end&page_size=10&fields=Requests app.js:154162:7
Status code 200: Success. app.js:54173:3
02-09-2016
06:11 PM
2 Kudos
I installed Ambari 2.1.2.1 and HDP 2.3.2 on RHEL 6.6. Installation went fine and the services are all up and running, but when I try to make any config change in the Ambari UI, I see the following message: "The configuration changes could not be validated for consistency due to an unknown error. Your changes have not been saved yet. Would you like to proceed and save the changes?" How do I get rid of this error?
Labels:
- Apache Ambari
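Since the popup only reports an unknown error, the actual stack trace for the 500 from the validation call lands in the server log; a sketch of how to capture it (log path is the default Ambari location):

```shell
# Tail the server log while re-saving the config change in the UI;
# the stack trace behind the 500 from the validation endpoint shows up here.
tail -f /var/log/ambari-server/ambari-server.log
# Afterwards, search around the save timestamp for the exception:
grep -n 'ERROR' /var/log/ambari-server/ambari-server.log | tail -20
```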
02-08-2016
07:37 PM
1 Kudo
I don't see any option to remove hadoop.security.key.provider.path. Please see the attached image.
02-08-2016
07:21 PM
Sorry if it is a dumb question, but how do I remove those entries from Ambari? I don't see any option to delete them.
02-08-2016
07:03 PM
Hi @Neeraj Sabharwal I checked those two entries in Ambari; both of them are set to blank. There are no values for those two entries. What exactly do you mean by disabling them?
02-08-2016
06:42 PM
2 Kudos
I installed Ranger and Ranger KMS on a cluster. Ranger started fine, but Ranger KMS is failing to start, so I would like to remove it for now and install it later. I tried to remove it using the following command:

FALBDCDQ0001V:~# curl -u admin:xxxxxx -H "X-Requested-By: ambari" -X DELETE http://localhost:8080/api/v1/clusters/hwtest/services/RANGER_KMS

but it failed with the following exception:

"status" : 500,
"message" : "org.apache.ambari.server.controller.spi.SystemException: An internal system exception occurred: Cannot remove hwtest/RANGER_KMS. RANGER_KMS_SERVER is in a non-removable state."

I logged into the Ambari database to see if Ranger KMS is in a good state to remove, but the desired_state is showing as STARTED even though the service failed to start. So I updated the state using the following command and tried to remove the service again, but I still got the same error as above:

update servicedesiredstate set desired_state='INSTALLED' where service_name='RANGER_KMS';

Here is the error when I try to restart Ranger KMS:

2016-02-05 17:12:37,755 - Error : unknown url type: falbdcdq0001v.farmersinsurance.com
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/RANGER_KMS/0.5.0.2.3/package/scripts/kms_server.py", line 82, in <module>
KmsServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 216, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 484, in restart
self.start(env)
File "/var/lib/ambari-agent/cache/common-services/RANGER_KMS/0.5.0.2.3/package/scripts/kms_server.py", line 55, in start
enable_kms_plugin()
File "/var/lib/ambari-agent/cache/common-services/RANGER_KMS/0.5.0.2.3/package/scripts/kms.py", line 274, in enable_kms_plugin
raise Fail('Ranger service is not started on given host')
resource_management.core.exceptions.Fail: Ranger service is not started on given host
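Rather than editing servicedesiredstate directly, Ambari's supported way to make a service removable is to stop it through the same REST API first, so Ambari itself records the components as stopped; a sketch using the cluster name from the post (credentials are placeholders):

```shell
# 1) Tell Ambari to stop RANGER_KMS so its recorded state becomes INSTALLED (stopped):
curl -u admin:xxxxxx -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Stop RANGER_KMS"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://localhost:8080/api/v1/clusters/hwtest/services/RANGER_KMS
# 2) Once every component reports stopped, the DELETE should pass the removable-state check:
curl -u admin:xxxxxx -H "X-Requested-By: ambari" -X DELETE \
  http://localhost:8080/api/v1/clusters/hwtest/services/RANGER_KMS
```

Updating the database behind Ambari's back tends to leave the in-memory and host-component states inconsistent, which is why the DELETE kept failing after the SQL update.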
02-05-2016
05:28 PM
1 Kudo
@Neeraj Sabharwal I modified oozie_url, but there is no difference. I am thinking of re-installing Hue on this box, so do you have any document that would help me install Hue manually? This cluster has no internet access, so I need to download all the binaries before I can install manually.
02-04-2016
08:20 PM
@Neeraj Sabharwal I configured hue.ini as per the link I posted above. I checked it again, but I could not see anything out of the ordinary. Is there a particular section/config I should be looking at? Or is it okay if I upload the .ini file for your review?
... View more