Member since: 10-20-2017
Posts: 59
Kudos Received: 0
Solutions: 0
04-23-2018
04:39 AM
When data is accidentally removed from HDFS, is there a way to recover it from the Trash? Trash is enabled with the interval set to 360. However, I don't see the files in the user's .Trash directory.
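A minimal sketch of how trash recovery is usually checked on Hadoop 2.x; the username "hdfsuser" and the file path below are placeholders, not from the original post:

```shell
# Confirm trash is actually enabled (fs.trash.interval > 0, in minutes)
hdfs getconf -confKey fs.trash.interval

# Deleted files land under the trash of the user who RAN the delete,
# not necessarily the file owner's trash:
hdfs dfs -ls /user/hdfsuser/.Trash/Current
hdfs dfs -ls /user/hdfsuser/.Trash          # older checkpoints, if any

# Restore by moving the file back to its original location
hdfs dfs -mv /user/hdfsuser/.Trash/Current/data/file.csv /data/file.csv
```

Note that files removed with `hdfs dfs -rm -skipTrash`, or deleted through APIs that bypass the trash (e.g. direct FileSystem.delete calls), never reach `.Trash` at all, which would explain an empty trash directory.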
Labels:
- Apache Hadoop
04-04-2018
07:26 PM
@Geoffrey Shelton Okot I have already downloaded the jar. My question is: since I've already set up the Ambari server using a MySQL DB and the MySQL JDBC driver, would running setup again with BDB and its jar cause a conflict? Or can I set up Ambari using any number of databases, e.g. MySQL for Ambari, Derby for Oozie, BDB for Falcon, etc.?
04-04-2018
06:16 PM
Currently my Ambari server is set up using a MySQL DB and the corresponding JDBC driver. If I run the setup again to specify a different DB for Falcon, will that cause inconsistency? Or can I just use MySQL for Ambari and BDB for Falcon?
02-28-2018
12:31 AM
I think I found the answer. The logs showed that dfs.datanode.dir was inconsistent. I added a healthy datanode, balanced the cluster, then deleted the data directories from the inconsistent nodes after taking a backup in /tmp. After restarting, everything works fine now.
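The steps described above might look roughly like this; the data directory path and balancer threshold are assumptions for illustration, not taken from the post:

```shell
# 1. After adding a healthy datanode, rebalance the cluster
#    (threshold = max % deviation in disk usage between datanodes)
hdfs balancer -threshold 10

# 2. On each inconsistent node: back up the data dir, then clear it
#    (/hadoop/hdfs/data is an example dfs.datanode.dir value)
tar -czf /tmp/dn-data-backup.tar.gz /hadoop/hdfs/data
rm -rf /hadoop/hdfs/data/*

# 3. Restart the datanode (e.g. from Ambari) so it re-registers
#    with empty storage and re-replicates blocks from healthy nodes
```

Balancing first matters here: it ensures the blocks on the inconsistent nodes have replicas elsewhere before their local copies are wiped.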
02-27-2018
09:12 PM
@Jay Kumar SenSharma The reason I felt my JDK was corrupt was that replacing it with a fresh copy from my home directory (the same version as before, and the same copy I always use) fixed the issue. That means some pieces of the existing JDK were lost, since it's the same copy each time. I didn't find any JDK crash report (hs_err_pid*) or JDK-level errors. And the initial issue isn't completely resolved yet: once I started my namenode after replacing the JDK as mentioned above and pointed the environment at the right location, the datanode service on all the data nodes went down. Trying to start the service doesn't even show an error this time. It starts fine but comes down immediately without any trace of an error.
02-26-2018
11:36 PM
@Jay Kumar SenSharma Thanks for the prompt response. You are right. Somehow my JDK got corrupted. I've set JAVA_HOME again and I see the services starting. I also keep seeing this happen, where my JDK gets corrupted. Do you have any suggestions on the JDK permissions for the ambari user and the other services in the hadoop group?
02-26-2018
09:15 PM
My namenode service suddenly stopped. I don't see any error messages in the log either. When I tried starting it from Ambari, it says:

ulimit -c unlimited ; /usr/hdp/2.6.4.0-91/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.4.0-91/hadoop/conf start namenode'' returned 1

When I tried to start the namenode from the command line, it gives me this error:

Can't find HmacSHA1 algorithm
Labels:
- Apache Hadoop
02-02-2018
11:01 PM
I still face the issue. I'm doing a non-root installation. I built psutil as root, which went fine, but when I try to restart the metrics monitor it fails.
02-02-2018
08:52 PM
I'm having an issue starting the metrics monitor on one of my nodes. I get the following error: "ImportError: cannot import name _common"
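This import error usually points at a stale or mismatched compiled build of the psutil copy bundled with the Ambari Metrics monitor. A rebuild sketch, assuming a typical HDP layout (the paths and the ambari user/group below are assumptions):

```shell
# psutil ships inside the metrics monitor's resource_monitoring dir
cd /usr/lib/python2.6/site-packages/resource_monitoring/psutil

# Remove the stale build output and rebuild with the same Python
# interpreter the monitor runs under
rm -rf build/
python setup.py build

# In a non-root install the monitor runs as a non-root user, so the
# rebuilt artifacts must be readable by it
chown -R ambari:hadoop build/

ambari-metrics-monitor restart
```

The key detail for a non-root installation is the last ownership step: building as root but running the monitor as another user leaves the compiled `_psutil_*` modules unreadable, which surfaces as an ImportError.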
Labels:
- Apache Spark
01-18-2018
09:33 PM
With HDP-2.6, I'm facing an issue with the zookeeper-server and client install with the above config. I tried removing and re-installing, but that didn't work either. mkdir: cannot create directory `/usr/hdp/current/zookeeper-client': File exists
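One plausible diagnosis, offered as a sketch: on HDP, the entries under /usr/hdp/current are symlinks managed by hdp-select, and "File exists" often means a plain directory was left behind by an earlier failed install where the symlink should be. The HDP version number below is an example:

```shell
# Is it a real directory (bad) or a symlink (expected)?
ls -ld /usr/hdp/current/zookeeper-client

# If it is a plain directory, move it aside rather than deleting it
mv /usr/hdp/current/zookeeper-client /usr/hdp/current/zookeeper-client.bak

# Let hdp-select recreate the symlink for the installed stack version
hdp-select set zookeeper-client 2.6.4.0-91
```

After the symlink is restored, re-running the ZooKeeper install/start from Ambari should no longer hit the mkdir failure.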