Member since: 12-14-2016
Posts: 58
Kudos Received: 1
Solutions: 5
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1369 | 04-19-2017 05:49 PM
 | 1182 | 04-19-2017 11:43 AM
 | 1791 | 04-19-2017 09:07 AM
 | 2884 | 03-26-2017 04:20 PM
 | 4550 | 02-03-2017 04:44 AM
04-19-2017
11:43 AM
1 Kudo
Spark 1.6.2 is available from HDP 2.4.3 through HDP 2.5.3. It is possible to upgrade to Spark 1.6.2 manually, but that is not supported by Hortonworks. A manual upgrade could also cause issues with Zeppelin and the other associated tools/applications built in your environment. If you have paid support, you can raise a ticket and get it resolved. Cheers, Ram
04-19-2017
09:07 AM
Hi, when you unmount or mount a new data disk on a datanode, the node has to be restarted. So it is best to stop all the services on this node and keep it in maintenance mode. PS: Take a backup of your data partition to avoid data loss or corruption. Cheers, Ram
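A minimal shell sketch of that sequence, assuming a hypothetical data mount point /grid/01, a backup target /backup/grid01, and a placeholder device name; stop the node's services and put it in maintenance mode in Ambari before running any of this:

# back up the existing data partition before touching the mount (paths are examples)
rsync -a /grid/01/ /backup/grid01/

# unmount the old disk and mount the new one (device name is a placeholder)
umount /grid/01
mount /dev/sdc1 /grid/01

# add the new mount to /etc/fstab so it survives reboots, then verify
df -h /grid/01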
04-17-2017
06:10 PM
@vpoornalingam Will you be able to check this out for me?
04-17-2017
06:10 PM
All, we have been facing an issue when inserting data from a managed table into an external table on S3. Each time new data lands in the managed table, we need to append it to our external S3 table. Instead of appending, the insert replaces the old data with the newly received data (the old data is overwritten). I have come across a similar JIRA thread, but that patch is for Apache Hive (link at the bottom). Since we are on HDP, can anyone help me out with this? Versions: HDP 2.5.3, Hive 1.2.1000.2.5.3.0-37

create external table tests3prd (c1 string, c2 string) location 's3a://testdata/test/tests3prd/';
create table testint (c1 string, c2 string);
insert into testint values (1,2);
insert into tests3prd select * from testint; (run 2 times)

When I re-insert the same values 1,2, it overwrites the existing row and replaces it with the new record. In the S3 external location, the *0000_0 file is overwritten each time instead of a new copy being added. PS: JIRA thread: https://issues.apache.org/jira/browse/HIVE-15199
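A commonly suggested workaround (a sketch only, not the HIVE-15199 fix and not from this thread) is to land each batch in its own partition of the external table, so every insert writes new files under a new S3 prefix instead of colliding with the existing 000000_0 object. The HiveServer2 URL, table names, and the batch_dt partition column below are assumptions for illustration:

# connection URL, table names, and partition value are placeholders
beeline -u jdbc:hive2://hiveserver2-host:10000/default -e "
create external table if not exists tests3prd_part (c1 string, c2 string)
partitioned by (batch_dt string)
location 's3a://testdata/test/tests3prd_part/';
insert into tests3prd_part partition (batch_dt='2017-04-17')
select c1, c2 from testint;
"

Each load uses a new batch_dt value, so files written by earlier batches are never touched.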
Labels:
- Hortonworks Data Platform (HDP)
04-17-2017
02:42 PM
Hi, you can find the complete list of service user accounts that are created by default during installation; each component we install has its own service user and group requirements: http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.1.0/bk_ambari_reference_guide/content/_defining_service_users_and_groups_for_a_hdp_2x_stack.html You can customize the service accounts (root) as per your requirement during the final stages of the Ambari UI installation.
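As a quick sanity check after installation, you can confirm on a node that the expected service users and groups exist — a minimal sketch; the user names shown are common HDP defaults (hdfs, yarn, hive, ambari-qa), so adjust them to whatever you customized:

# check a few default HDP service users and their group memberships
for u in hdfs yarn hive ambari-qa; do
    id "$u"
done

# list the members recorded for the hadoop group in the local group database
getent group hadoop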
04-13-2017
11:19 AM
Two things to check:
1. The 'L' in 'Local' is uppercase, not lowercase, so the command is hdfs dfs -copyFromLocal <source> hdfs://<URI>
2. Remove the backslash in front of the "C:\users" path and try again.
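For reference, a corrected invocation from a Windows client might look like the following; the local file path and the HDFS destination are placeholders, not values from the original post:

# note the capital 'L' in copyFromLocal and no leading backslash before the Windows path
hdfs dfs -copyFromLocal C:\users\myuser\data.csv hdfs://namenode-host:8020/user/myuser/data.csv

# confirm the file arrived
hdfs dfs -ls /user/myuser/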
04-13-2017
08:48 AM
Can you try manually uploading the file from the CLI using one of the commands below?
hdfs dfs -copyFromLocal <localsrc> URI
OR hdfs dfs -put <localsrc> ... <dst>
Eg: hdfs dfs -put localfile /user/hadoop/hadoopfile
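If the manual upload succeeds, a quick follow-up check (using the same example path as above) confirms the file is in place:

# confirm the file exists in HDFS and check its size
hdfs dfs -ls /user/hadoop/hadoopfile
hdfs dfs -du -h /user/hadoop/hadoopfile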
04-05-2017
09:32 AM
Hi @Kuldeep Kulkarni, I have lost the id_rsa private key file, and now I need to add two more nodes. Will it be possible to add the new datanodes? What is the solution for this? Can I generate a new key pair and apply the new private key in Ambari? Thanks in advance. Regards, Ram
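One common approach (a sketch of the general procedure, not a confirmed answer from this thread) is to generate a fresh key pair on the Ambari server, push the public key to the nodes being added, and paste the new private key into the Add Hosts wizard; the key file name and host names below are placeholders:

# generate a new key pair on the Ambari server (no passphrase, for the Ambari bootstrap)
ssh-keygen -t rsa -f ~/.ssh/id_rsa_new -N ""

# copy the new public key to each node being added (host names are examples)
ssh-copy-id -i ~/.ssh/id_rsa_new.pub root@new-datanode1
ssh-copy-id -i ~/.ssh/id_rsa_new.pub root@new-datanode2

# verify passwordless login works before registering the hosts in Ambari
ssh -i ~/.ssh/id_rsa_new root@new-datanode1 hostname

The contents of ~/.ssh/id_rsa_new then go into the SSH private key field when registering the new hosts.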
03-26-2017
04:20 PM
Thanks for the reply, folks. I have found the issue. When we import the data from the legacy DB servers using Spark, Hive staging files are created during the Spark execution in the target location where the data resides. When we export the data to S3 using distcp, these Hive staging files also move to that bucket, so when we query it with Hive, it seems to check all those staging files before returning the output. The number of splits also matters; there were a lot of them, so I merged the splits together to get fewer mappers and better performance, which I have now achieved. I now get the count of the 3-million-record table in a fraction of a second!
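For reference, the small-file merge can also be handled by Hive itself at write time — a hedged sketch of the session settings that make Hive combine small output files; the threshold values and table names are illustrative, not the ones used above:

# ask Hive to merge small output files at the end of the job (values and table names are illustrative)
hive -e "
set hive.merge.mapfiles=true;
set hive.merge.mapredfiles=true;
set hive.merge.smallfiles.avgsize=134217728;
set hive.merge.size.per.task=268435456;
insert overwrite table target_table select * from source_table;
"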
03-26-2017
04:12 PM
Thanks for the reply! Bookmarked the link!