Member since: 06-29-2015 · Posts: 47 · Kudos Received: 8 · Solutions: 2

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 865 | 02-08-2021 08:52 AM |
| | 1003 | 03-16-2017 04:52 PM |
12-15-2022 03:46 AM
Try the Skivia app. It can sync Grafana and Hive data without coding. Read more here.
05-18-2021 08:00 AM
I was able to find the number of mappers used by distcp with this command: `MAPPERS=$(yarn container -list $app | grep 'Total number of containers' | awk -F: '{print $2}')` The next step is to look only at distcp jobs that copy from/to HDFS (and not to S3). What's the best way to do that?
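The count-extraction step from the command above can be sketched against a hard-coded sample line; the sample text is an assumption, since the exact `yarn container -list` output format varies between YARN versions:

```shell
#!/usr/bin/env sh
# Hypothetical sample line standing in for `yarn container -list $app` output.
sample="Total number of containers :25"

# Split on ':' and strip whitespace from the second field to get the count.
mappers=$(printf '%s\n' "$sample" | awk -F: '{gsub(/[[:space:]]/, "", $2); print $2}')
echo "$mappers"   # prints 25 for the sample line above
```

Swapping the `sample` variable for the live `yarn container -list $app` pipe gives the original one-liner back.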
02-08-2021 08:52 AM
Sorry, I forgot to post earlier. I was able to fix my own issue: I had to restart Ranger and Solr. One of the Solr instances failed on restart, but I was able to see the Audit tab and other settings in Ranger. Thanks for looking into it.
10-28-2019 12:19 PM
I'm getting the same error. This is the response I received from Cloudera support: "Only the dfs commands such as ls/put/mv work on wasb using the wasb connector. Admin commands such as dfsadmin, as well as fsck, work only with the native hadoop/hdfs implementation."
10-12-2017 07:02 PM
@Sandeep More I logged in and ran kinit, so I have a valid ticket and am able to run other hdfs commands.
04-10-2017 08:51 PM
@Neeraj Sabharwal Please see if you can help me.
03-20-2017 08:20 PM
2 Kudos
@P D That is the usual QA step. Pick and choose from here: https://github.com/aengusrooneyhortonworks/HadoopBenchmarks If you use HDFS, Hive, and HBase, choose the applicable benchmarks. At a minimum you could run the Hive test-bench and teragen/terasort, and maybe one benchmark for HBase. You could do those, but it may take time. Alternatively, you could just log in to Hive and run some queries, then log in to HBase and run the usual commands through hbase-shell; you could also run SQL via Phoenix. This is a smoke-test suite that you could build for upgrades. You may have to include tests for all the tools in the ecosystem: there will be Storm topologies to handle, Spark jobs to test, and so on. A test plan for each tool is a good thing.
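The minimal smoke test described above could be sketched as a script like the one below. Everything in it is an assumption to adapt: the examples-jar path, the beeline JDBC URL, and the temp directories. DRY_RUN=1 (the default here) only prints the commands, since the real run needs a live cluster:

```shell
#!/usr/bin/env sh
# Upgrade smoke-test sketch. DRY_RUN=1 (default) prints commands instead of
# executing them; set DRY_RUN=0 on a live cluster.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "WOULD RUN: $*"
  else
    "$@"
  fi
}

# Assumed jar location; adjust for your distribution.
EXAMPLES_JAR=/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar

# HDFS + MapReduce: a small teragen/terasort round trip.
run hadoop jar "$EXAMPLES_JAR" teragen 100000 /tmp/smoke/teragen
run hadoop jar "$EXAMPLES_JAR" terasort /tmp/smoke/teragen /tmp/smoke/terasort

# Hive: a trivial query through beeline (JDBC URL is an assumption).
run beeline -u "jdbc:hive2://localhost:10000/default" -e "SHOW DATABASES;"

# HBase: a status check via the hbase shell.
run sh -c 'echo status | hbase shell -n'
```

Adding one `run` line per remaining service (Storm, Spark, Phoenix) extends the same pattern.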
04-05-2017 04:39 PM
In addition, also set CLUSTER.OPERATOR if there are any operator roles defined in Ambari; otherwise cluster operators will not be able to log in to Logsearch. This is how I set it up: AMBARI.ADMINISTRATOR, CLUSTER.ADMINISTRATOR, CLUSTER.OPERATOR, CLUSTER.USER
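As a rough sketch, the role list above would be a comma-separated value in the Logsearch configuration; the property name `logsearch.roles.allowed` is my recollection of the HDP Logsearch configs, so verify it against your Ambari/Logsearch version:

```properties
# Roles allowed to log in to Logsearch (comma-separated, no spaces)
logsearch.roles.allowed=AMBARI.ADMINISTRATOR,CLUSTER.ADMINISTRATOR,CLUSTER.OPERATOR,CLUSTER.USER
```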
02-15-2017 09:33 PM
Yes, it seems to be related. I'll keep an eye on it. Actually, I've just opened a case with Hortonworks too.
03-10-2018 05:39 PM
I found https://community.hortonworks.com/articles/16144/write-or-append-failures-in-very-small-clusters-un.html