Member since: 03-06-2020
Posts: 406
Kudos Received: 56
Solutions: 36
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 910 | 11-21-2024 10:40 PM
 | 879 | 11-21-2024 10:12 PM
 | 2687 | 07-23-2024 10:52 PM
 | 2007 | 05-16-2024 12:27 AM
 | 6703 | 05-01-2024 04:50 AM
07-04-2021
11:19 PM
Hi @roshanbi It looks like you do not have permission to create a workflow. Please try it from another user, or ask your admin team to grant the privilege to you. I found the blog below, which shows a step-by-step procedure to create a workflow and schedule it in Hue. Please check and follow: https://www.programmersought.com/article/44483680705/
06-23-2021
07:21 AM
Hi @kairel This is hitting the HiveServer2 (HS2) thread count limit. Can you follow the steps below and let us know if this helps?
1. In Hive > Configs > Advanced > Custom hive-site, set the following parameters:
hive.server2.async.exec.threads=10
hive.server2.async.exec.wait.queue.size=10
2. Restart HiveServer2.
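If it helps, one way to confirm the new values took effect after the restart is from beeline (the JDBC URL below is a placeholder; substitute your own HS2 host and port):
## beeline -u "jdbc:hive2://<hs2-host>:10000" -e "set hive.server2.async.exec.threads;"
## beeline -u "jdbc:hive2://<hs2-host>:10000" -e "set hive.server2.async.exec.wait.queue.size;"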
06-23-2021
07:12 AM
Hi @uraz, Please paste the entire console output or a screenshot of the error you are facing. Also check the HiveServer2 (HS2) logs; if there is any exception, please attach that as well.
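For example, something like the below can surface recent exceptions (the log path here is the common HDP default; adjust it if your cluster logs elsewhere):
## grep -i "exception" /var/log/hive/hiveserver2.log | tail -20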
06-21-2021
08:23 PM
Yes, it seems it is not properly installed. May I know whether you are using plain Apache Hadoop, CDH, or HDP to manage it? If you followed any document for the Hadoop installation, please provide the link here.
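As a quick check, the version banner usually tells you which build you are on; CDH and HDP builds typically embed the distribution in the version string, while plain Apache Hadoop shows only the release number:
## hadoop version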
06-21-2021
09:37 AM
1 Kudo
Hi, Go to /etc/alternatives and provide the output of the below commands. (This is to check whether the Linux alternatives subsystem is pointing to binaries from an older CDH version that you are no longer using.) Example:
[root@node2 alternatives]# ls -lrth | grep hdfs
lrwxrwxrwx 1 root root 62 Aug 15 2020 hdfs -> /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.4951328/bin/hdfs
[root@node2 alternatives]# ls -lrth /usr/bin/hdfs
lrwxrwxrwx 1 root root 22 Aug 15 2020 /usr/bin/hdfs -> /etc/alternatives/hdfs
What is the CDH version you are using currently? Have you recently upgraded the cluster? In my previous comment I added the "#" just to point out the command; if you added it in ~/.bash_profile as well, do remove the "#" and try again. How did you install hdfs? Is it possible for you to reinstall it?
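You can also ask the alternatives subsystem directly which target is active (on RHEL/CentOS the tool is named alternatives; on Debian/Ubuntu it is update-alternatives):
## alternatives --display hdfs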
06-21-2021
06:31 AM
1 Kudo
It seems hdfs is not installed/configured properly. Can you check its path using the "which hdfs" command? If you are not able to see the path, check the PATH environment variable in the "~/.bash_profile" file; set the path something like below and try:
## PATH=$PATH:$HADOOP_HOME/bin
then run -> source ~/.bash_profile
Below is the output from my test cluster:
[root@node2 bin]# which hdfs
/usr/bin/hdfs
[root@node2 bin]# sudo -u hdfs hdfs dfsadmin -safemode leave
Safe mode is OFF
[root@node2 bin]# id hdfs
uid=993(hdfs) gid=990(hdfs) groups=990(hdfs)
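Note that if HADOOP_HOME itself is not set, the PATH entry above expands to nothing, so ~/.bash_profile should end up looking something like the below (the install path is only an example; point it at your actual Hadoop directory):
## export HADOOP_HOME=/usr/local/hadoop
## export PATH=$PATH:$HADOOP_HOME/bin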
06-20-2021
07:59 PM
1 Kudo
Hi, Can you try the below command and see?
## sudo -u hdfs hdfs dfsadmin -safemode leave
If this doesn't work, provide the output of the below from the terminal:
## id hdfs
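You can also check the current state first, so we know whether safe mode is actually ON before trying to leave it:
## sudo -u hdfs hdfs dfsadmin -safemode get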
06-20-2021
07:12 AM
1 Kudo
Safe mode in Hadoop is a maintenance state of the NameNode, during which the NameNode doesn't allow any modifications to the file system. Can you use the below command to come out of safe mode and then try to create the directory? (hadoop dfsadmin is deprecated in newer releases; hdfs dfsadmin -safemode leave is the current form.)
## hadoop dfsadmin -safemode leave
## hdfs dfs -mkdir /user/justee
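Once safe mode is OFF, you can confirm the directory was created with a simple listing:
## hdfs dfs -ls /user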
06-20-2021
06:41 AM
1 Kudo
Hi, Hadoop has native implementations of certain components, both for performance and for cases where no Java implementation is available. These components are packaged in a single, dynamically-linked native Linux library called the native Hadoop library; on *nix platforms it is named libhadoop.so. The message is just a warning, not an error, and it can be ignored. Please check whether you have issues with other hdfs commands as well, and check the HDFS logs to see if you can find the exact errors.
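If you want to confirm which native libraries Hadoop can actually load on your node, there is a built-in check for that:
## hadoop checknative -a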
06-16-2021
07:45 PM
Hi, I suspect the failover controller (ZKFC) isn't picking up the configured timeout for ha.health-monitor.rpc-timeout.ms, and this is causing the failover to fail. To speed up the quota calculation during failover, put the following in the NameNode safety valve for hdfs-site.xml:
dfs.namenode.quota.init-threads=16
ha.failover-controller.new-active.rpc-timeout.ms=90000 (i.e., 90 seconds)
Try this out and let us know.
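If you are applying these through the Cloudera Manager safety valve, the snippet is entered as XML property blocks. A minimal sketch with the values suggested above (assuming the standard hdfs-site.xml property format):
<property>
  <name>dfs.namenode.quota.init-threads</name>
  <value>16</value>
</property>
<property>
  <name>ha.failover-controller.new-active.rpc-timeout.ms</name>
  <value>90000</value>
</property>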