Member since: 05-31-2016
Posts: 89
Kudos Received: 14
Solutions: 8
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4075 | 03-10-2017 07:05 AM |
| | 5960 | 03-07-2017 09:58 AM |
| | 3528 | 06-30-2016 12:13 PM |
| | 5800 | 05-20-2016 09:15 AM |
| | 27285 | 05-17-2016 02:09 PM |
05-20-2016 09:15 AM
To conclude, it is not possible to remove the warning message from the Hive query output without migrating to Beeline. I found a very simple and well-known workaround to this problem, one that almost all of us use in shell scripting: I used grep to filter the warning message out of my output. Below is how I modified my script to fix the problem:

hive -S -d ns=$hiveDB -d tab=$t -d dunsCol=$c1 -d phase="$ph1" -d error=$c2 -d ts=$eColumnArray -d reporting_window=$rDate -f $dir'select_count.hsql' | grep -v "^WARN" > $gOutPut 2> /dev/null
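For reuse across scripts, the same idea can be factored into a small shell function; the wrapper name and the usage line are my own illustration, not part of the original script. It runs a Hive script in silent mode, passes through any -d substitutions, discards stderr, and strips SLF4J WARN lines from stdout:

```bash
# Hypothetical helper: run a Hive script quietly and drop SLF4J warnings.
# Any extra arguments (e.g. -d key=value substitutions) are passed through.
run_hive_quiet() {
    local script="$1"; shift
    hive -S "$@" -f "$script" 2>/dev/null | grep -v '^WARN'
}

# Illustrative usage with the variables from the script above:
# run_hive_quiet "$dir"select_count.hsql -d ns="$hiveDB" -d tab="$t" > "$gOutPut"
```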
05-19-2016 02:22 PM
I am adding the jar below but don't get any warnings, so the JIRA you mentioned applies only to the Spark jars.

add jar hdfs:///data/95-utilities/hive-contrib-1.1.0-cdh5.5.2.jar;
05-19-2016 01:12 PM
Is there a workaround for this, @Jitendra Yadav?
05-19-2016 12:56 PM
In my Beeline output I can still see the header even after setting --showheader=false. Below is the query that I executed:

beeline -u jdbc:hive2://10.241.1.85:10000/kaliamoorthya --silent=true --outputformat=csv2 --showheader=false -e 'select dns,tran_session from testtable group by dns,tran_session order by dns;'

Here is the output:

dns,tran_session
012345490,06-05-2016T0258
012365279,06-05-2016T0258
012365478,06-05-2016T0258
012365480,06-05-2016T0258
012365481,06-05-2016T0258
012365482,06-05-2016T0258
012365483,06-05-2016T0258
012365484,06-05-2016T0258
012365485,06-05-2016T0258
012365486,06-05-2016T0258
012365487,06-05-2016T0258
012365489,06-05-2016T0258
012365491,06-05-2016T0258
012365492,06-05-2016T0258
012365493,06-05-2016T0258
012365494,06-05-2016T0258
012365495,06-05-2016T0258
012365496,06-05-2016T0258
012365497,06-05-2016T0258
012365498,06-05-2016T0258
012365499,06-05-2016T0258
012365588,06-05-2016T0258

Any clue what is causing this?
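While debugging, a blunt shell-side fallback would be to drop the first output line with tail; this is only a sketch, and it assumes the header is always exactly one line:

```bash
# Hypothetical fallback: strip the header row in the shell if the
# --showheader flag is not honored (tail -n +2 prints from line 2 onward).
beeline -u jdbc:hive2://10.241.1.85:10000/kaliamoorthya --silent=true \
    --outputformat=csv2 --showheader=false \
    -e 'select dns,tran_session from testtable group by dns,tran_session order by dns;' \
    | tail -n +2
```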
Labels:
- Apache Hadoop
- Apache Hive
05-19-2016 12:51 PM
Our cluster was recently upgraded to CDH 5.5, and every output generated by a Hive query now carries the warning messages shown below:

WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.

But when I use Beeline I don't see them. Is there a way to get rid of the warnings without switching to Beeline, at least for now? The Hive version is 1.1.0.
Labels:
- Apache Hive
05-17-2016 02:09 PM
Finally, I got an answer to this. As a developer, one cannot execute dfsadmin commands due to policy restrictions. To check NameNode availability, I used the if statement below in a shell script, which did the trick. It won't tell you exactly which NameNode is active, but with this check you can easily branch the program accordingly:

if hdfs dfs -test -e hdfs://namenodeip/* ; then
  echo exist
else
  echo not exist
fi
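Building on the same trick, here is a sketch that probes both NameNodes of an HA pair (the hostnames nn1.example.com and nn2.example.com are hypothetical) and reports the first one that answers; a standby NameNode rejects client operations, so the test succeeds only against the active one:

```bash
# Probe each NameNode in turn; only the active one serves client requests.
for nn in nn1.example.com nn2.example.com; do
    if hdfs dfs -test -e hdfs://$nn/ ; then
        echo "$nn appears to be the active NameNode"
        break
    fi
done
```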
05-16-2016 09:49 AM
As a developer, how can I check the current state of a given NameNode? I have tried the getServiceState command, but that is only intended for admins with superuser access. Is there any command that can be run from the edge node to get the status of a given NameNode?
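For reference, this is the admin command I tried; it reports active or standby for a NameNode service id (nn1 here is illustrative; the real ids come from the dfs.ha.namenodes.* properties in hdfs-site.xml), but on our cluster it is restricted to superusers:

```bash
# Reports "active" or "standby" for the given NameNode service id;
# requires admin privileges on our cluster.
hdfs haadmin -getServiceState nn1
```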
Labels:
- Apache Hadoop
05-11-2016 09:45 AM
1 Kudo
I was browsing around and came across ACID tables in Hive. Could anyone let me know what exactly they are and how they differ from normal Hive tables?
Labels:
- Apache Hive
04-27-2016 04:48 AM
Finally, I fixed it. Thanks @Emil. The command was right, but I was using the wrong IP address: I had used Hue's IP as cluster B's IP. I replaced it with the NameNode's IP and it worked.
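For anyone landing here later, the corrected command has this shape (the host name and port are illustrative; 8020 is the usual NameNode RPC port, while 8888 is Hue's default web port, which is how the two got mixed up):

```bash
# Address the remote cluster's NameNode RPC endpoint, not the Hue host.
hdfs dfs -rm hdfs://clusterB-namenode:8020/path/to/the/file.txt
```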
04-26-2016 06:37 AM
I am creating a workflow in which I need to delete some HDFS files that live on a different cluster. Below is the command that I tried from cluster A:

clusterA-ip ] $ hdfs dfs -rm hdfs://clusterB-ip:8888/path/to/the/file.txt

But this is not working. Could you please help?
Labels:
- Apache Hadoop