Member since: 02-09-2015
Posts: 95
Kudos Received: 8
Solutions: 9
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5763 | 08-23-2021 04:07 PM |
| | 1507 | 06-30-2021 07:34 AM |
| | 1823 | 06-30-2021 07:26 AM |
| | 14425 | 05-17-2019 10:27 PM |
| | 3165 | 04-08-2019 01:00 PM |
07-01-2020
04:21 PM
1 Kudo
Hi all, I faced the same issue while Kerberizing an HDP 3.1.0 cluster integrated with Isilon. The Ambari server runs on the PostgreSQL DB host, which was NOT part of the Hadoop cluster, so it was NOT on the host list:

[root@hdpdb ~]# curl -H "X-Requested-By: ambari" -u admin:admin "http://hdpdb.gz.local:8080/api/v1/clusters/Panyu/hosts"

I had to add the Ambari server through the Add Host wizard and install just the clients. That worked around the "Host not found, hostname=xxxx" issue in my case. Hope it helps, cheers!
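For reference, a minimal sketch of checking whether a specific host is registered with Ambari, using the same cluster name and credentials as the call above (the hostname at the end is just an example):

[root@hdpdb ~]# curl -H "X-Requested-By: ambari" -u admin:admin "http://hdpdb.gz.local:8080/api/v1/clusters/Panyu/hosts/hdpdb.gz.local"

A 404 response here is the API-level equivalent of the "Host not found" error: the host simply isn't registered with the cluster yet.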
11-09-2019
04:57 PM
We are facing the same issue in our PySpark streaming job. Could you please let me know whether it is possible to handle this in PySpark as well? https://stackoverflow.com/questions/58755063/failed-to-find-leader-for-topics-java-lang-nullpointerexception-nullpointerexce?noredirect=1#comment103834465_58755063
05-20-2019
08:11 PM
Seeing exactly the same thing. The query works fine on 2.6.2 but fails with the same error, even on a small table. In both cases, the tables are external. Totally stuck at the moment.
12-19-2018
03:59 PM
Hi Ben, But if we cannot find that file, what should we do? Why are these files missing? Thanks, Mo
10-26-2018
04:17 PM
1 Kudo
Hi, Unfortunately, service migrations from platform to platform are not especially easy to complete. This type of migration is normally handled by our services teams. The process typically requires a number of steps, including but not limited to understanding your active use cases and which services you have in your existing cluster. Please reach out to our sales team, or to your account team if you are an actively licensed customer, for guidance.
04-09-2019
02:44 PM
Hi guys, I followed the above steps and was able to execute commands like show databases and show tables successfully. I also created a database from spark-shell, created a table, and inserted some data into it. However, I am not able to query the data, either from the newly created table in Spark or from the tables that already exist in Hive, and I get this error:

java.lang.AbstractMethodError: Method com/hortonworks/spark/sql/hive/llap/HiveWarehouseDataSourceReader.createBatchDataReaderFactories()Ljava/util/List; is abstract at com.hortonworks.spark.sql.hive.llap.HiveWarehouseDataSourceReader.createBatchDataReaderFactories(HiveWarehouseDataSourceReader.java)

The commands are as below:

import com.hortonworks.hwc.HiveWarehouseSession
val hive = HiveWarehouseSession.session(spark).build()
hive.createTable("hwx_table").column("value", "string").create()
hive.executeUpdate("insert into hwx_table values('1')")
hive.executeQuery("select * from hwx_table").show

Then the error appears. I am using the below command to start spark-shell:

spark-shell --master yarn --jars /usr/hdp/current/hive-warehouse-connector/hive-warehouse-connector_2.11-1.0.0.3.1.2.0-4.jar --conf spark.security.credentials.hiveserver2.enabled=false
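A java.lang.AbstractMethodError usually indicates a binary mismatch between the Spark runtime and the HWC jar, so it is worth checking that the jar version matches the installed HDP/Spark version. For comparison, a minimal sketch of a spark-shell launch with the connector settings the HDP 3.x documentation lists; the JDBC URL, metastore URI, and ZooKeeper quorum below are placeholders for your environment:

spark-shell --master yarn \
  --jars /usr/hdp/current/hive-warehouse-connector/hive-warehouse-connector_2.11-1.0.0.3.1.2.0-4.jar \
  --conf spark.security.credentials.hiveserver2.enabled=false \
  --conf spark.sql.hive.hiveserver2.jdbc.url="jdbc:hive2://<hs2-interactive-host>:10500/" \
  --conf spark.datasource.hive.warehouse.metastoreUri="thrift://<metastore-host>:9083" \
  --conf spark.datasource.hive.warehouse.load.staging.dir="/tmp" \
  --conf spark.hadoop.hive.llap.daemon.service.hosts="@llap0" \
  --conf spark.hadoop.hive.zookeeper.quorum="<zk-host1>:2181,<zk-host2>:2181,<zk-host3>:2181"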
04-05-2017
10:45 AM
Yes, you have to.
01-26-2017
07:39 PM
1 Kudo
I was having the same issue and getting the same error, but when I ran commands from the directory where I had installed CDH, I was able to run all of them: hadoop, hdfs, spark-shell, etc.

For example, if your CDH installation location is /dat/anlt1/cld/cloudera/CDH-5.8.3-1.cdh5.8.3.p0.2/bin, you can test:

$ cd /dat/anlt1/cld/cloudera/CDH-5.8.3-1.cdh5.8.3.p0.2/bin
[root@xyz bin]# ./hadoop

If that works, then you need to set up the environment variable on your Unix master server. For RHEL:

[root@xyz~]# echo "$PATH"
/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
[root@xyz~]# export PATH=$PATH:/path/to/CDH_installation_bin_path

For me it's /dat/anlt1/cld/cloudera/CDH-5.8.3-1.cdh5.8.3.p0.2/bin:

[root@xyz~]# echo "$PATH"
/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/dat/anlt1/cld/cloudera/CDH-5.8.3-1.cdh5.8.3.p0.2/bin

To make the change permanent:

$ echo "export PATH=$PATH:/dat/anlt1/cld/cloudera/CDH-5.8.3-1.cdh5.8.3.p0.2/bin" >> /etc/profile

After that, restart (reboot) your server.
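One small note, assuming a standard RHEL login shell: a full reboot should not be necessary to pick up the change; re-sourcing the profile in the current session is enough:

$ source /etc/profile
$ echo "$PATH"      # should now end with the CDH bin directory
$ hadoop version    # verify the command resolves without the full path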
05-27-2016
05:12 AM
Hi, Sorry guys for the reply, which is too late. Anyway, I tried different combinations (memory/disk channel, etc.) and found that Flume either fails or is too slow when loading larger files (more than 1 GB). So, I conclude that Flume is not good for large files. Instead, I am now using the HDFS NFS gateway to dump files directly to HDFS using scp. Believe me, a correctly configured NFS gateway and NFS mount point are really cool old boys. Thanks, Obaid
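For anyone who wants to try the same approach, a minimal sketch of mounting the HDFS NFS gateway and copying a large file through it; the gateway host, mount point, and target path are placeholders:

[root@client ~]# mkdir -p /hdfs_nfs
[root@client ~]# mount -t nfs -o vers=3,proto=tcp,nolock,sync <nfs-gateway-host>:/ /hdfs_nfs
[root@client ~]# cp /data/bigfile.dat /hdfs_nfs/user/obaid/

The vers=3,proto=tcp,nolock options match what the Hadoop NFS gateway documentation recommends, since the gateway only supports NFSv3 over TCP.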
01-10-2016
05:35 AM
Don't forget to reload the collection to see the effect.
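Assuming this is a Solr collection (e.g. Cloudera Search), the reload is a single call to the Collections API; the collection name and port below are placeholders:

$ curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=collection1"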