Member since: 01-19-2017
Posts: 3681
Kudos Received: 633
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1600 | 06-04-2025 11:36 PM |
| | 2068 | 03-23-2025 05:23 AM |
| | 980 | 03-17-2025 10:18 AM |
| | 3730 | 03-05-2025 01:34 PM |
| | 2565 | 03-03-2025 01:09 PM |
04-29-2019
07:54 PM
1 Kudo
@duong tuan anh Great to know it helped! If you found this answer addressed your question, please take a moment to log in and click the "Accept" link on the answer. That would be a great help to Community users trying to find the solution quickly for these kinds of errors.
04-29-2019
06:29 AM
@Erkan ŞİRİN Can you try using:

```
sqoop import --connect jdbc:mysql://sandbox-hdp.hortonworks.com/azhadoop --driver com.mysql.jdbc.Driver --username root --password hadoop
```
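A complete import usually also names a source table and a target directory; here is a minimal sketch, assuming a hypothetical table `mytable` in the `azhadoop` database from the command above:

```shell
# --table and --target-dir below are hypothetical; replace them with your
# own table name and HDFS destination.
sqoop import \
  --connect jdbc:mysql://sandbox-hdp.hortonworks.com/azhadoop \
  --driver com.mysql.jdbc.Driver \
  --username root --password hadoop \
  --table mytable \
  --target-dir /user/root/mytable \
  -m 1
```

`-m 1` runs a single mapper, which avoids the need for a `--split-by` column on tables without a primary key.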
04-28-2019
09:12 PM
You can use the below HDFS command:

```
$ hdfs dfs -cat hive_table_data_folder/p* > new_file_name
```

Let me know whether that helped.
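An equivalent built-in that skips the shell redirection is `hdfs dfs -getmerge`, which concatenates the part files into one local file (a sketch reusing the folder and file names from the command above):

```shell
# Merge every part file under the table folder into a single local file.
hdfs dfs -getmerge hive_table_data_folder new_file_name
```

Unlike the `-cat` pipeline, `getmerge` keeps the data stream entirely inside the Hadoop client, which matters for large tables.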
04-28-2019
06:11 PM
@Michael Bronson If your earlier API command was like below:

```
curl -u admin:admin -i -H 'X-Requested-By: ambari' -X PUT -d '{"HostRoles": {"state": "INSTALLED"}}' http://node2:8080/api/v1/clusters/HDP/hosts/node1/host_components/SPARK2_THRIFTSERVER
```

```
HTTP/1.1 202 Accepted
...
{
  "href" : "http://node2:8080/api/v1/clusters/HDP/requests/174",
  "Requests" : {
    "id" : 174,
    "status" : "Accepted"
  }
}
```

Then you should take note of the request id, i.e. 174 in the above, and query its status:

```
curl -u admin:admin -i -H 'X-Requested-By: ambari' -X GET http://node2:8080/api/v1/clusters/HDP/requests/174
```
```
HTTP/1.1 200 OK
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Cache-Control: no-store
Pragma: no-cache
Set-Cookie: AMBARISESSIONID=140pqxfdj6o06egemwmnallrt;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
User: admin
Content-Type: text/plain
Vary: Accept-Encoding, User-Agent
Content-Length: 2301
...
  "cluster_name" : "HDP",
  "completed_task_count" : 1,
  "create_time" : 1556457466529,
  "end_time" : 1556457510975,
  "exclusive" : false,
  "failed_task_count" : 0,
  "id" : 174,
  "inputs" : null,
  "operation_level" : null,
  "progress_percent" : 100.0,
  "queued_task_count" : 0,
  "request_context" : "",
  "request_schedule" : null,
  "request_status" : "COMPLETED",
  "resource_filters" : [ ],
  "start_time" : 1556457466884,
  "task_count" : 1,
  "timed_out_task_count" : 0,
  "type" : "INTERNAL_REQUEST"
},
...
```

Once `"request_status"` reads `COMPLETED`, the earlier operation has finished.
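Rather than re-running the GET by hand, the status check can be looped until the request reaches a terminal state. A sketch using the same URL, request id, and credentials as above (adjust all three for your cluster):

```shell
REQUEST_ID=174
AMBARI_URL="http://node2:8080/api/v1/clusters/HDP/requests/${REQUEST_ID}"

while :; do
  # Extract the request_status field from the JSON response.
  status=$(curl -s -u admin:admin -H 'X-Requested-By: ambari' "$AMBARI_URL" \
    | grep -o '"request_status" : "[A-Z_]*"' | cut -d'"' -f4)
  echo "Request ${REQUEST_ID}: ${status}"
  case "$status" in
    COMPLETED|FAILED|ABORTED|TIMEDOUT) break ;;
  esac
  sleep 5
done
```

The `grep`/`cut` parsing assumes the spacing Ambari uses in the output above; a JSON-aware tool such as `jq` would be more robust if it is available on the host.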
04-26-2019
01:08 PM
2 Kudos
@duong tuan anh You have a permission issue. Please run the following steps as the root user:

```
# su - hdfs
$ hdfs dfs -chown -R hbase:hbase /apps/hbase/data/
```

That should resolve your problem. HTH
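To confirm the ownership change took effect, a quick check against the same path as the steps above:

```shell
# The owner column should now read hbase for the data directories.
hdfs dfs -ls /apps/hbase/data/
```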
04-25-2019
07:13 PM
@Yegane Ahmadnejad
1. Don't manually set JAVA_HOME if you are on RHEL/CentOS; register the JDK with alternatives instead:

```
tar xzf jdk-8u171-linux-x64.tar.gz
cd /opt/jdk1.8.0_171/
alternatives --install /usr/bin/java java /opt/jdk1.8.0_171/bin/java 2
alternatives --config java
alternatives --install /usr/bin/jar jar /opt/jdk1.8.0_171/bin/jar 2
alternatives --install /usr/bin/javac javac /opt/jdk1.8.0_171/bin/javac 2
alternatives --set jar /opt/jdk1.8.0_171/bin/jar
alternatives --set javac /opt/jdk1.8.0_171/bin/javac
```

2. Don't change the warehouse root directory.
3. Don't create the hiveserver2 znode manually.
4. I didn't see the Hive database setup step — did you run it?
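After the alternatives registration above, you can verify which JDK the system resolves (a quick check, assuming a RHEL/CentOS host with the alternatives tool):

```shell
alternatives --display java   # list the registered java candidates
java -version                 # confirm the active JDK is 1.8.0_171
```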
04-25-2019
04:38 AM
@Madhura Mhatre Surely you could just do that, but what happens to the replicas stored on that particular data node? Somehow your cluster has to reconstruct those replicas if you had a replication factor of more than 1. Here I was talking about planned maintenance! Just switching it off will also force your cluster to do the same thing in the background, with alerts, and ONLY when the replicas have been reconstructed will those alerts go away. There is a performance cost for both decommissioning and just unplugging the data node.
04-24-2019
04:40 PM
@Madhura Mhatre It's well documented by Hortonworks. Once you launch the decommissioning, the blocks on that node will be redistributed to the remaining nodes. If the replication factor is higher than the number of data nodes left after the removal, the decommission is not going to succeed!
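The decommissioning flow described above can be sketched as follows for a manually managed cluster (the exclude-file path is an assumption — it is whatever `dfs.hosts.exclude` points at in your hdfs-site.xml; Ambari-managed clusters do these steps from the UI):

```shell
# 1. Add the DataNode's hostname to the exclude file referenced by
#    dfs.hosts.exclude (path and hostname below are hypothetical).
echo "datanode3.example.com" >> /etc/hadoop/conf/dfs.exclude

# 2. Tell the NameNode to re-read its include/exclude files; this starts
#    copying the node's blocks to the remaining DataNodes.
hdfs dfsadmin -refreshNodes

# 3. Watch the node's state and shut it down only once it reports
#    "Decommissioned".
hdfs dfsadmin -report
```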
04-23-2019
02:58 PM
@Shilpa Gokul If you found this answer addressed your question, please take a moment to log in and click the "Accept" link on the answer. That would be a great help to Community users trying to find the solution quickly for these kinds of errors.
04-23-2019
12:28 PM
@Dennis Suhari If you found this answer addressed your question, please take a moment to log in and click the "Accept" link on the answer. That would be a great help to Community users trying to find the solution quickly for these kinds of errors.