Member since: 06-07-2016
Posts: 81
Kudos Received: 3
Solutions: 5

My Accepted Solutions
Title | Views | Posted
--- | --- | ---
 | 1429 | 02-21-2018 07:54 AM
 | 3679 | 02-21-2018 07:52 AM
 | 4690 | 02-14-2018 09:30 AM
 | 1994 | 10-13-2016 04:18 AM
 | 12318 | 10-11-2016 08:26 AM
09-28-2016
03:38 AM
@Sandeep Nemuri My requirement is below: I want to omit the keys and use role-based authentication. The AWS instance is assigned the role, but hadoop distcp does not work when I run the command without the keys.

<property>
  <name>fs.s3a.access.key</name>
  <description>AWS access key ID. Omit for Role-based authentication.</description>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <description>AWS secret key. Omit for Role-based authentication.</description>
</property>
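A minimal sketch of the same distcp invocation pointed explicitly at the EC2 instance-profile credentials provider; this assumes a Hadoop/S3A version that honors fs.s3a.aws.credentials.provider, and the HDFS path and bucket name are placeholders carried over from the original command:

# Assumption: the S3A connector on this Hadoop version supports pluggable
# credential providers; InstanceProfileCredentialsProvider fetches the role
# credentials from the EC2 instance metadata service, so no keys are passed.
hadoop distcp \
  -Dfs.s3a.server-side-encryption-algorithm=AES256 \
  -Dfs.s3a.aws.credentials.provider=com.amazonaws.auth.InstanceProfileCredentialsProvider \
  -update \
  hdfs://<HDFS dir>/ s3a://${BUCKET_NAME}/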
09-27-2016
06:42 AM
Dear All, We have been using hadoop distcp to back up HDFS data to AWS S3 via a script in crontab, passing the AWS keys on the distcp command line. It also works without the AWS keys, but we sometimes get timeout errors, so it is not reliable. Is it mandatory to use the AWS keys with the hadoop distcp command? If not, why do I get timeout/socket errors when running without the keys? I tested manually a few times with the same result.

Command with keys:

hadoop distcp -Dfs.s3a.server-side-encryption-algorithm=AES256 -Dfs.s3a.access.key=${AWS_ACCESS_KEY_ID} -Dfs.s3a.secret.key=${AWS_SECRET_ACCESS_KEY} -update hdfs://<HDFS dir>/ s3a://${BUCKET_NAME}/

Without keys:

hadoop distcp -Dfs.s3a.server-side-encryption-algorithm=AES256 -update hdfs://<HDFS dir>/ s3a://${BUCKET_NAME}/

Below is the error we get when running without AWS keys:

"dfs.sh_20160630_010001: com.amazonaws.AmazonClientException: Unable to upload part: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: 3C1FD2E8F503F052, AWS Error Code: RequestTimeout, AWS Error Message: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed."
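If the RequestTimeout persists, the S3A client's socket timeout and retry settings are one thing worth checking; a hedged sketch of core-site.xml overrides (the values are illustrative, not recommendations):

<!-- Illustrative values only; tune to your network and upload sizes. -->
<property>
  <name>fs.s3a.connection.timeout</name>
  <value>200000</value> <!-- socket timeout in milliseconds -->
</property>
<property>
  <name>fs.s3a.attempts.maximum</name>
  <value>20</value> <!-- how many times the client retries a failed request -->
</property>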
Labels:
- Apache Hadoop
08-05-2016
07:31 AM
@sbhat It worked, awesome. Thanks a lot.
08-05-2016
06:36 AM
@sganatra
$ curl -u admin:xxxx -i -H “X-Requested-By: ambari” -X DELETE http://172.27.3.42:8080/api/v1/clusters/eim_edl_dev_cluster_1/services/HIVE
curl: (6) Could not resolve host: xn--ambari-1i0c; Name or service not known
HTTP/1.1 400 Bad Request
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
User: admin
Set-Cookie: AMBARISESSIONID=huz7jn29p3fggsw2xggfqto;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Content-Length: 107
Server: Jetty(8.1.17.v20150415)
{
  "status" : 400,
  "message" : "CSRF protection is turned on. X-Requested-By HTTP header is required."
}
08-05-2016
03:54 AM
Hi, I tried installing the HIVE, TEZ, and PIG services on the existing cluster and the installation failed. I want to remove / clean up the components and services. Since the installation failed, I could not find the components to decommission and then remove; only HCAT got installed. I tried removing the service through the API, but it fails with the error below. The CSRF option is not in the ambari.properties file either:

$ cat /etc/ambari-server/conf/ambari.properties | grep -i CSR
[root@ip-172-27-3-42.ap-southeast-1.compute.internal]:/home/ambari
$ curl -u admin:xxxx -H “X-Requested-By: ambari” -X GET http://172.27.3.42:8080/api/v1/clusters/eim_edl_dev_cluster_1/services/HIVE
curl: (6) Could not resolve host: xn--ambari-1i0c; Name or service not known
{
"href" : "http://172.27.3.42:8080/api/v1/clusters/eim_edl_dev_cluster_1/services/HIVE",
"ServiceInfo" : {
"cluster_name" : "eim_edl_dev_cluster_1",
"maintenance_state" : "OFF",
"service_name" : "HIVE",
"state" : "INSTALL_FAILED"
},
"alerts_summary" : {
"CRITICAL" : 0,
"MAINTENANCE" : 0,
"OK" : 0,
"UNKNOWN" : 0,
"WARNING" : 0
},
"alerts" : [ ],
"components" : [
{
"href" : "http://172.27.3.42:8080/api/v1/clusters/eim_edl_dev_cluster_1/services/HIVE/components/HCAT",
"ServiceComponentInfo" : {
"cluster_name" : "eim_edl_dev_cluster_1",
"component_name" : "HCAT",
"service_name" : "HIVE"
}
},
{
"href" : "http://172.27.3.42:8080/api/v1/clusters/eim_edl_dev_cluster_1/services/HIVE/components/HIVE_CLIENT",
"ServiceComponentInfo" : {
"cluster_name" : "eim_edl_dev_cluster_1",
"component_name" : "HIVE_CLIENT",
"service_name" : "HIVE"
}
},
{
"href" : "http://172.27.3.42:8080/api/v1/clusters/eim_edl_dev_cluster_1/services/HIVE/components/HIVE_METASTORE",
"ServiceComponentInfo" : {
"cluster_name" : "eim_edl_dev_cluster_1",
"component_name" : "HIVE_METASTORE",
"service_name" : "HIVE"
}
},
{
"href" : "http://172.27.3.42:8080/api/v1/clusters/eim_edl_dev_cluster_1/services/HIVE/components/HIVE_SERVER",
"ServiceComponentInfo" : {
"cluster_name" : "eim_edl_dev_cluster_1",
"component_name" : "HIVE_SERVER",
"service_name" : "HIVE"
}
},
{
"href" : "http://172.27.3.42:8080/api/v1/clusters/eim_edl_dev_cluster_1/services/HIVE/components/MYSQL_SERVER",
"ServiceComponentInfo" : {
"cluster_name" : "eim_edl_dev_cluster_1",
"component_name" : "MYSQL_SERVER",
"service_name" : "HIVE"
}
},
{
"href" : "http://172.27.3.42:8080/api/v1/clusters/eim_edl_dev_cluster_1/services/HIVE/components/WEBHCAT_SERVER",
"ServiceComponentInfo" : {
"cluster_name" : "eim_edl_dev_cluster_1",
"component_name" : "WEBHCAT_SERVER",
"service_name" : "HIVE"
}
}
],
"artifacts" : [ ]
======================================
$ curl -u admin:xxxxx -H “X-Requested-By: ambari” -X DELETE http://172.27.3.42:8080/api/v1/clusters/eim_edl_dev_cluster_1/services/HIVE
curl: (6) Could not resolve host: xn--ambari-1i0c; Name or service not known
{
"status" : 400,
"message" : "CSRF protection is turned on. X-Requested-By HTTP header is required."
$ curl -u admin:xxxxx -H “X-Requested-By: ambari” -X DELETE -d ‘{“RequestInfo”:{“state”:”INSTALL_FAILED”}}’ http://172.27.3.42:8080/api/v1/clusters/eim_edl_dev_cluster_1/services/HIVE
curl: (6) Could not resolve host: xn--ambari-1i0c; Name or service not known
{
  "status" : 400,
  "message" : "CSRF protection is turned on. X-Requested-By HTTP header is required."
}
Labels:
- Apache Ambari
- Apache Hive
- Apache Pig
- Apache Tez
07-28-2016
09:30 AM
@sbhat Thank you, I will try it and let you know. Once I bring down the active NameNode service, the standby will become the active NameNode. After that, if I start the NameNode service on the node where it was stopped, will it become the standby? Also, do I need to perform any manual steps, such as putting the active NameNode into safe mode and running saveNamespace before stopping the service? Or will putting the active NameNode into maintenance mode take care of this?
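For checking which NameNode is active before and after the switch, the stock HDFS HA admin commands may help; a small sketch, assuming the NameNodes are registered under the service IDs nn1 and nn2 (substitute the IDs from dfs.ha.namenodes.<nameservice> in your hdfs-site.xml):

# Report the HA state of each NameNode.
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# Manually fail over from nn1 (currently active) to nn2 (standby).
hdfs haadmin -failover nn1 nn2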
07-28-2016
09:19 AM
Hi, how do I fail over the cluster from the active NameNode to the standby NameNode from the Ambari console? Could someone please share the exact steps?
Labels:
- Apache Ambari
- Apache Hadoop
07-21-2016
09:13 AM
1 Kudo
@Mateusz: In that case you may need to write a small shell script to create the user or group, or add a user to a group, since SSH keys are enabled across the nodes; see the sketch below. Otherwise, you could try the expect command, which can supply the password for you. Hope that clarifies. If you are satisfied, please rate.
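A minimal sketch of such a script, assuming passwordless SSH as root to every node listed in a hypothetical hosts.txt file; the group and user names are placeholders:

#!/bin/sh
# Create a group and user on every node; relies on existing SSH keys.
GROUP=hadoopusers   # placeholder group name
USER=newuser        # placeholder user name

while read -r host; do
  ssh "root@${host}" \
    "groupadd -f ${GROUP}; id ${USER} >/dev/null 2>&1 || useradd -g ${GROUP} ${USER}"
done < hosts.txt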
07-13-2016
07:39 AM
@sankar rao Go through the link below; it covers the relevant parameter with an example. Kindly vote if it is relevant. http://hortonworks.com/blog/hdfs-acls-fine-grained-permissions-hdfs-files-hadoop/
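For quick reference, HDFS ACLs are managed with hdfs dfs -setfacl / -getfacl once dfs.namenode.acls.enabled is set to true; a short sketch with placeholder user and path names:

# Grant read/execute on a directory to one additional user.
hdfs dfs -setfacl -m user:sankar:r-x /data/shared

# Inspect the resulting ACL entries.
hdfs dfs -getfacl /data/shared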
06-21-2016
05:00 AM
Hi All, We have been facing issues while Ab Initio jobs are run. Here is an outline of the setup: on our end, Hadoop is used only for storage; the rest is handled by Ab Initio (they have a product for data ingestion and retrieval). When they ingest data (all write operations) and run queries to read some tables, we hit the error below. I checked and raised the ulimit (-n) value for the abiuser account, which they use to run the ingestion/read jobs, from 1,024 to 20,000 on all DataNodes and the NameNode. I also verified the hdfs limits on all DataNodes, shown below. The HDP version is 2.4, and there are no errors on the Ambari console either.

hdfs - nofile 128000
hdfs - nproc 65536
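Per-user limits like these usually live in /etc/security/limits.conf or a drop-in file under /etc/security/limits.d/; a sketch of what the raised limits for the abiuser account might look like (the nproc value is illustrative, since only nofile was mentioned above):

# /etc/security/limits.d/abiuser.conf (illustrative values)
abiuser  -  nofile  20000
abiuser  -  nproc   65536

# Verify after the user logs in again:
#   ulimit -n   # open files
#   ulimit -u   # max user processes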
Labels: