Member since: 02-02-2016
Posts: 583
Kudos Received: 518
Solutions: 98

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3174 | 09-16-2016 11:56 AM |
| | 1354 | 09-13-2016 08:47 PM |
| | 5344 | 09-06-2016 11:00 AM |
| | 3093 | 08-05-2016 11:51 AM |
| | 5169 | 08-03-2016 02:58 PM |
07-22-2016
08:11 PM
2 Kudos
Hi @AR Firstly, please check whether HBase still has the XaSecureAuthorization coprocessor configured in hbase-site.xml. Look for the properties below. If they are there, kindly remove them through the UI, restart the HBase services, and then check that the underlying HDFS directories have the correct permissions and see if that resolves the issue.
hbase.coprocessor.master.classes
hbase.coprocessor.region.classes
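To confirm whether the coprocessor classes are still present on a node, you can grep the deployed client configuration; a minimal sketch, assuming the default HDP config path /etc/hbase/conf/hbase-site.xml:
# Look for the XaSecure/Ranger coprocessor entries in the deployed HBase config
# (assumes the default HDP location /etc/hbase/conf/hbase-site.xml)
grep -A 1 "hbase.coprocessor.master.classes" /etc/hbase/conf/hbase-site.xml
grep -A 1 "hbase.coprocessor.region.classes" /etc/hbase/conf/hbase-site.xml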
07-21-2016
01:51 PM
1 Kudo
@Rajib Mandal I believe this is not the Ambari UI page; it looks like you have another Tomcat instance running on port 8080. Kindly stop that Tomcat instance and then restart the Ambari server.
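To see which process is actually holding port 8080 before you stop it, something along these lines should work (a sketch, assuming a Linux host where netstat or lsof is available):
# Show the process listening on port 8080 (run as root to see the PID/program name)
netstat -tlnp | grep ':8080'
# or, alternatively:
lsof -i :8080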
07-18-2016
02:35 PM
The only recommended way is to upgrade the cluster to HDP 2.4.2. You can also install Spark 1.6.1 manually on HDP 2.3.2, but we don't recommend that. Thanks.
07-18-2016
02:30 PM
1 Kudo
Hi @pooja khandelwal As mentioned in the same doc link, Spark 1.6.1 comes with the Spark HBase connector, which is under Technical Preview (TP); moreover, Spark 1.6.1 is only certified on HDP 2.4.2. On the other side, HDP 2.3.2 ships Spark 1.4.1, which doesn't support the Spark HBase connector in HDP.
06-30-2016
05:06 PM
2 Kudos
I don't believe we support Ubuntu 16 yet with the latest version; however, it will be supported once we have Ambari 2.4: https://issues.apache.org/jira/browse/AMBARI-16931
06-29-2016
10:01 PM
Once the upgrade is completed, we also need to click the Finalize button to complete the upgrade process. If you have done this and still see the previous version of Spark, then you should log in to the node and manually check the Spark package version with the "rpm -qa | grep spark" command.
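For example, on the node in question you could check both the installed packages and which stack version the Spark component currently points to (a sketch; hdp-select is assumed to be available on HDP 2.x nodes):
# List installed Spark packages and their versions
rpm -qa | grep spark
# Show which HDP stack version the Spark components are currently pointing to
hdp-select status | grep spark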
06-29-2016
08:44 PM
Hi @Jan Kytara I tried reproducing this issue on my cluster with your dataset and table, but it looks like it is working fine even with the skip.header.line.count parameter.
hive> select count(*) from corrupt_rows;
Query ID = hdfs_20160622192941_a2505b4a-96a7-4148-87ce-a52e92bd75c7
Total jobs = 1
Launching Job 1 out of 1
Status: Running (Executing on YARN cluster with App id application_1466074160497_0010)
--------------------------------------------------------------------------------
VERTICES STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
--------------------------------------------------------------------------------
Map 1 .......... SUCCEEDED 1 1 0 0 0 0
Reducer 2 ...... SUCCEEDED 1 1 0 0 0 0
--------------------------------------------------------------------------------
VERTICES: 02/02 [==========================>>] 100% ELAPSED TIME: 4.89 s
--------------------------------------------------------------------------------
OK
90
Time taken: 5.467 seconds, Fetched: 1 row(s)
-bash-4.1$ wc -l data.txt
91 data.txt
-bash-4.1$
Which HDP version are you using?
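For reference, a table definition that exercises the same parameter could look like this (a sketch; the single-column schema and the /tmp path are assumptions for illustration, not your original DDL):
hive> CREATE TABLE corrupt_rows (line STRING)
      TBLPROPERTIES ("skip.header.line.count"="1");
hive> LOAD DATA LOCAL INPATH '/tmp/data.txt' INTO TABLE corrupt_rows;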
06-29-2016
07:38 PM
2 Kudos
Your sorting happens on the basis of the key; here is an example in Scala.
val file = sc.textFile("some_local_text_file_pathname")
val wordCounts = file.flatMap(line => line.split(" "))
.map(word => (word, 1))
.reduceByKey(_ + _, 1) // 2nd arg configures one task (same as number of partitions)
.map(item => item.swap) // interchanges position of entries in each tuple
.sortByKey(true, 1) // 1st arg configures ascending sort, 2nd arg configures one task
.map(item => item.swap)
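To actually see the sorted output you then need to trigger an action, for example (just an illustration using the wordCounts value defined above):
wordCounts.collect().foreach(println) // prints (word, count) tuples, ordered by ascending count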