Member since: 04-18-2014
Posts: 49
Kudos Received: 0
Solutions: 2

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 13771 | 05-13-2014 07:38 AM
 | 14828 | 05-12-2014 10:22 AM
05-22-2018
03:35 AM
My scenario is a little bit different. I want to import one table from MSSQL into a Hive table, so I copied sqljdbc42.jar to the Sqoop lib directory in HDFS, but I am still facing the same errors. Please help me.
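For reference, the command I am trying is roughly like the sketch below; the host, database, table name, and credentials here are placeholders, not my real values:

```bash
# Rough sketch of a single-table Sqoop import from SQL Server into Hive.
# Host, database, table, and user below are placeholders.
sqoop import \
  --connect "jdbc:sqlserver://mssql-host:1433;databaseName=sales" \
  --driver com.microsoft.sqlserver.jdbc.SQLServerDriver \
  --username sqoop_user -P \
  --table orders \
  --hive-import \
  --hive-table default.orders \
  --num-mappers 1
```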
03-23-2018
02:36 PM
You have to run Apache Spark on HDP 2.6. You should start with an HDP cluster and then add HDF to it. See https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.1.1/bk_installing-hdf-and-hdp/content/ch_install-ambari.html
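If it helps, adding HDF onto an existing Ambari-managed HDP cluster comes down to installing the HDF management pack on the Ambari server host and then adding the HDF services from the Ambari UI; the mpack file name below is a placeholder, so take the exact version and download location from the linked guide:

```bash
# Sketch: install the HDF management pack on the Ambari server host,
# then restart Ambari and add the HDF services via "Add Service" in the UI.
# The mpack path and version are placeholders.
sudo ambari-server install-mpack \
  --mpack=/tmp/hdf-ambari-mpack-<version>.tar.gz \
  --verbose
sudo ambari-server restart
```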
09-27-2015
07:13 AM
> How can I have only 68 blocks?

That depends on how much data your HDFS is carrying. Is the number much lower than expected, and does it not match the output of 'hadoop fs -ls -R /' (the listing of all files)? The space report says only about 23 MB is used by HDFS, so the number of blocks looks OK to me.

> Also, when I run a hive job, it does not go beyond "Running job: job_1443147339086_0002". Could it be related?

This would be unrelated, but to resolve that issue consider raising the values under YARN -> Configuration -> Container Memory (NodeManager) and Container Virtual CPUs (NodeManager).
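For cross-checking the block count and space usage yourself, the stock HDFS tools below report the same numbers the UI is summarizing:

```bash
# Summarize HDFS capacity, used space, and block counts per DataNode.
hdfs dfsadmin -report

# Walk the namespace and show which blocks back each file;
# the summary at the end includes the total block count.
hdfs fsck / -files -blocks
```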
09-03-2015
05:16 PM
You will need the gateway copy, which lives under /etc/hive/conf/ on a node designated as a Hive Gateway (check Hive -> Instances in CM to find which hosts have the gateway role).
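Once you know which host has the gateway role, you can inspect the deployed client configuration there directly, for example:

```bash
# On a host with the Hive Gateway role: list the deployed client
# configuration and confirm hive-site.xml is present.
ls -l /etc/hive/conf/
cat /etc/hive/conf/hive-site.xml
```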
05-11-2015
09:37 AM
COLUMNS_OLD is a deprecated table where column metadata used to be stored; Hive might still have some information there for some reason. You can search both COLUMNS_OLD and COLUMNS_V2 when looking for your column.
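For example, against a MySQL-backed metastore the lookup could look like the sketch below; the metastore database name, credentials, and column name are assumptions you would need to adjust, and COLUMNS_OLD may not exist on newer schema versions:

```bash
# Hypothetical lookup of a column named 'my_column' in the metastore DB.
# Database name, user, and column name are placeholders.
mysql -u hive -p metastore -e "
  SELECT COLUMN_NAME, TYPE_NAME FROM COLUMNS_V2  WHERE COLUMN_NAME = 'my_column';
  SELECT COLUMN_NAME, TYPE_NAME FROM COLUMNS_OLD WHERE COLUMN_NAME = 'my_column';
"
```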