Member since: 11-07-2016
Posts: 637
Kudos Received: 253
Solutions: 144
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2193 | 12-06-2018 12:25 PM
 | 2222 | 11-27-2018 06:00 PM
 | 1726 | 11-22-2018 03:42 PM
 | 2775 | 11-20-2018 02:00 PM
 | 5005 | 11-19-2018 03:24 PM
10-24-2018
05:07 PM
Can you try the solution mentioned in this question: https://community.hortonworks.com/questions/61415/ranger-audit-to-solr-problem.html
10-24-2018
04:30 PM
@Lok! Reddy, did you enable Audit to Solr under the Ranger audit settings? Also, I do not see any services in Ranger. Did you restart all the services after enabling the Ranger plugins?
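For reference, a rough sketch of what the relevant audit properties look like in a Ranger plugin's audit configuration (the service placeholder and the Solr URL below are illustrative, not taken from your cluster):

ranger-<service>-audit.xml:
xasecure.audit.destination.solr = true
xasecure.audit.destination.solr.urls = http://<solr-host>:8886/solr/ranger_audits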
10-19-2018
01:12 AM
@Felix Albani, this worked like a charm. Thanks a lot for your help, really appreciate it 🙂 However, in the latest version of Ambari this should have been handled by Ambari itself. I do not see the manual step in this doc, so it must be either a doc bug or an Ambari issue in my cluster. https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.1.0/managing-high-availability/content/amb_enable_namenode_high_availability.html
10-18-2018
02:39 PM
@Felix Albani, yes, I tried copying both core-site.xml and hdfs-site.xml, but I am still facing the same issue. Attaching some logs: the Spark Thrift Server start logs in debug mode and the corresponding YARN application logs. yarn-app-logs.txt spark-spark-orgapachesparksqlhivethriftserverhivet.zip I also made sure that "/hadoop/yarn/local/usercache/spark/filecache/10/__spark_conf__.zip/__spark_conf__/__hadoop_conf__/core-site.xml" has the correct content.
10-18-2018
09:25 AM
1 Kudo
@Christos Stefanopoulos, that is the expected behaviour. If you want to achieve this, you need to create separate configs for each host. For example: hostA will have only /grid in its DataNode directories, and hostB will have /grid0 and /grid1 in its DataNode directories. You can do that using Ambari config groups, as sketched below: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.2/bk_ambari-operations/content/using_host_config_groups.html
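As a rough sketch of what each config group would override (the host names and mount paths below are placeholders): the DataNode directories property in hdfs-site is dfs.datanode.data.dir, and each config group gets its own comma-separated value.

Config group for hostA:
dfs.datanode.data.dir = /grid/hadoop/hdfs/data

Config group for hostB:
dfs.datanode.data.dir = /grid0/hadoop/hdfs/data,/grid1/hadoop/hdfs/data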
10-18-2018
05:35 AM
@Jay Kumar SenSharma, thanks for the input. I tried running a normal HDFS command and it works fine even when host1 is in standby. I checked the NameNode and ZKFC logs, but there is nothing much relevant to this. I also checked the memory settings; they are fine. Any idea where Spark picks up the NameNode info from? I am guessing it reads it from core-site.xml, but is that correct?
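In the meantime, one quick way to see which NameNode address Spark has actually resolved (a sketch using PySpark's internal _jsc bridge to the Hadoop configuration; the property names are the standard HDFS ones):

%pyspark
# Inspect the Hadoop configuration this SparkContext is using
hconf = sc._jsc.hadoopConfiguration()
print(hconf.get("fs.defaultFS"))      # should be the HA nameservice URI, e.g. hdfs://<nameservice>
print(hconf.get("dfs.nameservices"))  # HA nameservice id, if core-site/hdfs-site were picked up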
10-18-2018
04:21 AM
I see the Spark2 History Server available in my cluster. Can you check whether the Spark2 History Server is already installed on that node? Please refer to the screenshot. For clients, the Move option is not required; you can install clients on the new node in a similar way. You will see "Spark client" in the dropdown when you click +ADD.
10-17-2018
01:42 PM
@Anpan K, after you run the above snippet, content is created as an RDD. You can then perform whatever operations you want on that RDD. For example:

%pyspark
content = sc.textFile("file:///path/example.txt")
content.collect()                       # prints all lines
content.take(1)                         # prints the first line
lines = content.map(lambda x: len(x))   # character count of each line
lines.take(5)                           # prints the character counts of the first 5 lines

Similarly, you can perform any other operations you want.
10-17-2018
04:47 AM
@Sivakumar Mahalingam, I guess the problem is with the quotes. Use a straight quote (') instead of a curly quote (’).
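For example (a hypothetical Python snippet, since I cannot see your exact statement; the same applies to SQL string literals):

%pyspark
print('hello')  # straight quotes parse fine
# print(’hello’) would fail with: SyntaxError: invalid character '’' (U+2019)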