Member since
12-11-2015
213
Posts
87
Kudos Received
2
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3247 | 12-20-2016 03:27 PM
 | 12904 | 07-26-2016 06:38 PM
05-25-2016
02:31 AM
I am trying to use MySQL instead of Derby for Oozie. I installed MySQL on the Oozie server host and created a database. Now, during installation, Ambari shows the message below. The Ambari server is already set up, and I am adding services on multiple nodes now. Do I also have to install MySQL on the Ambari server host? This is not clear to me. Also, if I start with Derby, can I change it to MySQL later? Be sure you have run:
ambari-server setup --jdbc-db=mysql --jdbc-driver=/path/to/mysql/mysql-connector-java.jar on the Ambari Server host to make the JDBC driver available and to enable testing the database connection.
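For reference, the driver-registration step the installer asks for can be sketched as below. This is a hedged sketch: the connector package name and jar path are assumptions for a RHEL/CentOS host, and both runs belong on the Ambari Server host, not the Oozie host.

```shell
# Run on the Ambari Server host (not necessarily the Oozie host).
# Install the MySQL JDBC connector; package name assumed for RHEL/CentOS.
yum install -y mysql-connector-java

# Register the driver with Ambari so it can distribute it to agents and
# test the Oozie database connection during service setup.
ambari-server setup --jdbc-db=mysql \
  --jdbc-driver=/usr/share/java/mysql-connector-java.jar
```

After this, the "Test Connection" button on the Oozie database screen should be able to reach the MySQL instance.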
Labels: Apache Oozie
05-04-2016
07:55 PM
Thank you so much, Benjamin. We are starting with a small cluster: 2 master nodes (32 GB each), 4 data nodes, and 1 edge node.
05-04-2016
06:51 PM
Need a recommendation for a small 7-node cluster. Below is what I am planning to do:
MasterNode: NameNode, ResourceManager, HBase Master, Oozie Server, ZooKeeper Server
DataNode: DataNode, NodeManager, RegionServer
Web interface: Ambari server / HUE interface / Zeppelin / Ranger
Gateway Node: all the clients (HDFS, Hive, Spark, Pig, Mahout, Tez, etc.)
SecondaryNode: Secondary NameNode, HiveServer2, MySQL, WebHCat server, Hive Metastore
Any issues with this configuration? Also, do we need the clients on all the machines? Should I go with HDP 2.3 or 2.4? Thanks, Prakash
Labels: Hortonworks Data Platform (HDP)
04-08-2016
03:06 AM
@dgoodhand How do I back up the HUE database? I want to run HUE 3.8 on HDP but am not able to find good instructions.
04-07-2016
05:00 PM
I am running HUE 2.6 on HDP 2.3 and need to upgrade HUE to 3.8. Where can I find the instructions? How do I perform an in-place upgrade?
Labels: Cloudera Hue
03-15-2016
05:39 PM
The last few lines of the namenode log are: The reported blocks 1628 needs additional 2 blocks to reach the threshold 1.0000 of total blocks 1629.
The number of live datanodes 4 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
2016-03-15 13:39:28,096 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 193 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.mkdirs from 10.0.2.23:47933 Call#0 Retry#229: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /system/yarn/node-labels. Name node is in safe mode.
The reported blocks 1628 needs additional 2 blocks to reach the threshold 1.0000 of total blocks 1629.
The number of live datanodes 4 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
2016-03-15 13:39:30,097 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 193 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.mkdirs from 10.0.2.23:47933 Call#0 Retry#230: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /system/yarn/node-labels. Name node is in safe mode.
The reported blocks 1628 needs additional 2 blocks to reach the threshold 1.0000 of total blocks 1629.
The number of live datanodes 4 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
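The arithmetic behind that message can be sketched as below. This is a rough sketch using the totals from the log above; Hadoop's internal rounding of the block threshold may make its "additional blocks" count differ slightly from this.

```shell
# How many more reported blocks are needed before safe mode can exit,
# for the totals in the log (1628 of 1629 blocks reported), at the
# default threshold 1.0 and at a lowered threshold 0.9.
total=1629
reported=1628
for pct in 1.0 0.9; do
  awk -v t="$total" -v r="$reported" -v p="$pct" 'BEGIN {
    need = int(t * p) - r          # blocks required minus blocks reported
    if (need < 0) need = 0
    printf "threshold %.1f: %d more block(s) needed\n", p, need
  }'
done
```

At threshold 1.0 every block must be reported, so a single missing or corrupt block keeps the NameNode in safe mode indefinitely; at 0.9 the reported count already clears the bar.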
03-15-2016
05:01 PM
@Rahul Pathak I changed "dfs.namenode.safemode.threshold-pct" from 1 to 0.9, but it reverts to 1. This is the information from the NameNode web UI: Safe mode is ON. The reported blocks 1628 needs additional 2 blocks to reach the threshold 1.0000 of total blocks 1629. The number of live datanodes 4 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
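A likely reason the value reverts: on an Ambari-managed cluster, hdfs-site.xml is re-rendered from Ambari's own configuration database on every service restart, so manual edits to the file are overwritten. A hedged sketch of changing the property through Ambari instead, using its bundled configs.sh helper (hostname, cluster name, and credentials below are placeholders):

```shell
# Update the property in Ambari's configuration store so it survives
# restarts; Ambari will push the new hdfs-site.xml to the NameNode.
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
  set ambari-host.example.com MyCluster hdfs-site \
  "dfs.namenode.safemode.threshold-pct" "0.9"
```

The same change can be made in the Ambari web UI under HDFS > Configs, followed by a NameNode restart.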
03-15-2016
02:55 PM
2 Kudos
Not able to take the NameNode out of safe mode. Below is what happened:
a.) Ambari instructed me to take an HDFS checkpoint: "The last HDFS checkpoint is older than 12 hours. Make sure that you have taken a checkpoint before proceeding."
b.) I then put the NameNode in safe mode and created a checkpoint:
sudo su hdfs -l -c 'hdfs dfsadmin -safemode enter'
sudo su hdfs -l -c 'hdfs dfsadmin -saveNamespace'
c.) After that I am not able to restart the NameNode. I tried to manually leave safe mode with the command: hdfs dfsadmin -safemode leave
d.) I also restarted all the DataNodes, but the NameNode is not restarting; it times out.
e.) I am using Ambari.
Thanks
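For reference, the manual-recovery commands from the steps above, plus a consistency check, can be sketched as below. This assumes the hdfs superuser on the NameNode host; note that `-safemode leave` only succeeds once the NameNode's RPC server is actually up and responding.

```shell
# Confirm the current safe-mode state before forcing anything.
sudo su hdfs -l -c 'hdfs dfsadmin -safemode get'

# Force safe mode off (only works if the NameNode RPC port is reachable).
sudo su hdfs -l -c 'hdfs dfsadmin -safemode leave'

# Check the filesystem for missing or under-replicated blocks that may
# have been keeping the block-report threshold from being reached.
sudo su hdfs -l -c 'hdfs fsck / -blocks -locations'
```

If the NameNode times out before the RPC server comes up, the NameNode log itself (not dfsadmin output) is the place to look for the underlying failure.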
Labels: Apache Hadoop
03-14-2016
01:59 PM
1 Kudo
One of my users is getting the error below: Service 'webhcat' check failed: RA040 I/O error while requesting Ambari. He doesn't have any files currently; this is the first time he is using it. Thanks
Labels: Apache Hive
03-12-2016
12:59 AM
Thank you, everyone. All great answers.