Member since: 02-01-2019
Posts: 650
Kudos Received: 143
Solutions: 117
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2864 | 04-01-2019 09:53 AM |
| | 1499 | 04-01-2019 09:34 AM |
| | 7082 | 01-28-2019 03:50 PM |
| | 1608 | 11-08-2018 09:26 AM |
| | 3872 | 11-08-2018 08:55 AM |
07-04-2018
12:16 PM
@sudhir reddy sqlContext is not available by default in the Spark 2 shell. Create a SQLContext with the statement below after launching spark-shell, and then you can read the JSON using it: val sqlContext = new org.apache.spark.sql.SQLContext(sc) Let me know if this helps.
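As a hedged, non-interactive sketch of the same steps (the JSON path is a placeholder, not taken from this thread):

```bash
# Hedged sketch: create the SQLContext and read a JSON file by piping the
# statements into spark-shell. The JSON path is a placeholder.
spark-shell <<'EOF'
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
val df = sqlContext.read.json("/tmp/sample.json")
df.show()
EOF
```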
07-03-2018
04:21 PM
@M Sainadh You need Ambari Infra, HBase, and Kafka up and running before the Atlas service can be brought up. The error "ERROR Java::OrgApacheHadoopHbaseIpc::RemoteWithExtrasException: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing" says that HBase is not in a healthy state. Check whether all the region servers are up and running. If everything looks good, the next step is to look at the HBase master logs to see what is going wrong there.
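A hedged sketch of those checks, assuming an HDP node with the hbase shell on the PATH and default log locations:

```bash
# Quick HBase health checks before retrying Atlas. The log path is typical for
# HDP installs and may differ on your cluster.
echo "status 'simple'" | hbase shell
tail -n 200 /var/log/hbase/hbase-hbase-master-*.log
```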
07-03-2018
12:52 PM
@Bhushan Kandalkar Try copying the jars to /usr/hdp/current/ranger-usersync/lib/.
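For example (the jar name and source path below are placeholders):

```bash
# Hedged example of the step above; jar name and source path are placeholders.
cp /path/to/required-library.jar /usr/hdp/current/ranger-usersync/lib/
```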
07-03-2018
12:45 PM
@vivek jain Glad that it worked. Would you mind closing this thread by clicking "Accept", and asking a new question that includes the code used and the console output?
07-02-2018
04:18 PM
@vivek jain It seems that the app is not picking up hbase-site.xml and is connecting to localhost (connectString=localhost:2181). Copy hbase-site.xml to /etc/spark/conf/ on the node where you launch the job, and also pass hbase-site.xml using --files in the spark-submit command (--files /etc/spark/conf/hbase-site.xml).
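A minimal sketch of such a spark-submit call; the class name and jar are placeholders, not taken from the thread:

```bash
# Hypothetical spark-submit: ship hbase-site.xml with --files so the driver and
# executors resolve the real ZooKeeper quorum instead of localhost:2181.
# Class and jar names are placeholders.
spark-submit \
  --master yarn \
  --class com.example.HBaseApp \
  --files /etc/spark/conf/hbase-site.xml \
  my-hbase-app.jar
```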
07-02-2018
04:07 PM
This is not how HDFS works 🙂 You would have to configure the NFS gateway to be able to cd around and browse it like a local filesystem (https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_hdfs-nfs-gateway-user-guide/content/hdfs-nfs-gateway-user-guide.html). Alternatively, you can "wget" the files onto an HDFS client machine and then copy them into HDFS using "hdfs dfs -put <local-source> <hdfs-destination>".
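A small sketch of that alternative; the URL and paths are placeholders:

```bash
# Hedged example: download to the local filesystem of an HDFS client node,
# then push the file into HDFS. URL and paths are placeholders.
wget -O /tmp/data.csv http://example.com/data.csv
hdfs dfs -put /tmp/data.csv /user/myuser/data.csv
hdfs dfs -ls /user/myuser/
```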
07-02-2018
02:34 PM
1 Kudo
A connection should be established before running the query. Please refer to https://community.hortonworks.com/articles/155890/working-with-beeline.html
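For illustration, a hedged example of opening a Beeline connection first; the host, port, database, and user are assumptions and should match your HiveServer2:

```bash
# Hypothetical Beeline session: connect to HiveServer2 first, then run the query.
# Host, port, database, and user are assumptions.
beeline -u "jdbc:hive2://hiveserver2-host:10000/default" -n hive -e "SHOW TABLES;"
```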
07-02-2018
02:29 PM
1 Kudo
@Hugo Cosme Not sure how you are doing the "ls" in the terminal. The listing in the Ambari interface and the output of the "hdfs dfs -ls /tmp/" command in the terminal should match.
06-21-2018
06:30 AM
@Robert Cornell Once you pass a file using --files, it is placed in the container's current working directory, so there is no need to include a path in "-Dlog4j.configuration=config/log4j.properties". Instead, just pass the bare file name, as @Felix Albani shows in the comment above.
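A sketch of what that looks like in practice; the jar name and local path are placeholders:

```bash
# Hedged sketch: ship log4j.properties with --files and reference it by bare
# file name, since it lands in each container's working directory.
spark-submit \
  --master yarn \
  --files /path/to/log4j.properties \
  --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=log4j.properties" \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j.properties" \
  my-app.jar
```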
06-20-2018
03:43 PM
I think this is quite different from what is described in the question. If disabling the queue is your current goal: go to the Ranger Admin UI -> Service Manager -> click the YARN service -> create a new policy or edit an existing one. There you will find a toggle to enable/disable the policy; choose disable.