Member since: 04-13-2016
Posts: 422
Kudos Received: 150
Solutions: 55
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1398 | 05-23-2018 05:29 AM
 | 4180 | 05-08-2018 03:06 AM
 | 1231 | 02-09-2018 02:22 AM
 | 2202 | 01-24-2018 08:37 PM
 | 5193 | 01-24-2018 05:43 PM
01-03-2020
11:05 AM
@Jason4Ever: Please check whether your server is able to connect to the internet by running a few PING commands.
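For example, a quick connectivity check could look like the following; the target hostname is only a placeholder, so ping whatever repository or endpoint your server actually needs to reach:
$ ping -c 4 8.8.8.8       # raw IP reachability
$ ping -c 4 example.com   # DNS resolution plus reachability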
12-12-2019
03:12 PM
@PentaReddy: I guess the table is not getting dropped. Please run the repair statement and then run a select statement to retrieve the data: MSCK [REPAIR] TABLE table_name [ADD/DROP/SYNC PARTITIONS];
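For instance, a repair followed by a quick read might look like this; the JDBC URL, database, and table names are placeholders, and the ADD/DROP/SYNC PARTITIONS clause is only available on newer Hive releases:
$ beeline -u "<jdbc url>" -e "MSCK REPAIR TABLE my_db.my_table;"
$ beeline -u "<jdbc url>" -e "SELECT * FROM my_db.my_table LIMIT 10;"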
04-04-2019
05:10 AM
@Bharath Kumar: Yes, you can create them as no-login accounts in AD. Technically, they should be login accounts if you plan to run a service with them; that may vary based on the scenario.
05-23-2018
05:49 AM
@Bharath N
Try to perform the following steps on the failed DataNode:
1) Get the list of DataNode directories from /etc/hadoop/conf/hdfs-site.xml:
$ grep -A1 dfs.datanode.data.dir /etc/hadoop/conf/hdfs-site.xml
<name>dfs.datanode.data.dir</name>
<value>/data0/hadoop/hdfs/data,/data1/hadoop/hdfs/data,/data2/hadoop/hdfs/data,/data3/hadoop/hdfs/data,/data4/hadoop/hdfs/data,/data5/hadoop/hdfs/data,/data6/hadoop/hdfs/data,/data7/hadoop/hdfs/data,/data8/hadoop/hdfs/data,/data9/hadoop/hdfs/data</value>
2) Get the datanodeUuid by grepping the DataNode log:
$ grep "datanodeUuid=" /var/log/hadoop/hdfs/hadoop-hdfs-datanode-$(hostname).log | head -n 1 | perl -ne '/datanodeUuid=(.*?),/ && print "$1\n"'
1dacef53-aee2-4906-a9ca-4a6629f21347
3) Copy over a VERSION file from one of the <dfs.datanode.data.dir>/current/ directories of a healthy, running DataNode:
$ scp <healthy datanode host>:<dfs.datanode.data.dir>/current/VERSION ./
4) Replace the datanodeUuid in the VERSION file with the datanodeUuid from the grep above:
$ sed -i.bak -E 's|(datanodeUuid)=(.*$)|\1=1dacef53-aee2-4906-a9ca-4a6629f21347|' VERSION
5) Blank out the storageID= property in the VERSION file:
$ sed -i.bak -E 's|(storageID)=(.*$)|\1=|' VERSION
6) Copy this modified VERSION file to the current/ path of every directory listed in the dfs.datanode.data.dir property of hdfs-site.xml:
$ for i in {0..9}; do cp VERSION /data$i/hadoop/hdfs/data/current/; done
7) Make the VERSION file owned by hdfs:hdfs with permissions 644:
$ for i in {0..9}; do chown hdfs:hdfs /data$i/hadoop/hdfs/data/current/VERSION; done
$ for i in {0..9}; do chmod 644 /data$i/hadoop/hdfs/data/current/VERSION; done
8) One more level down, there is a different VERSION file located under the block pool's current folder, at /data0/hadoop/hdfs/data/current/BP-*/current/VERSION. This file does not need to be modified -- just place it in the appropriate directories. Copy this particular VERSION file from a healthy DataNode into the current/BP-*/current/ folder of each directory listed in dfs.datanode.data.dir of hdfs-site.xml:
$ scp <healthy datanode host>:<dfs.datanode.data.dir>/current/BP-*/current/VERSION ./VERSION2
$ for i in {0..9}; do cp VERSION2 /data$i/hadoop/hdfs/data/current/BP-*/current/VERSION; done
9) Make this VERSION file owned by hdfs:hdfs with permissions 644 as well:
$ for i in {0..9}; do chown hdfs:hdfs /data$i/hadoop/hdfs/data/current/BP-*/current/VERSION; done
$ for i in {0..9}; do chmod 644 /data$i/hadoop/hdfs/data/current/BP-*/current/VERSION; done
10) Restart the DataNode from Ambari. The VERSION file located at <dfs.datanode.data.dir>/current/VERSION will have its storageID repopulated with a regenerated ID.
If losing the data on this node is not an issue (say, for example, the node was previously in a different cluster, or was out of service for an extended time), then simply delete all data and subdirectories under each dfs.datanode.data.dir (keep the top-level directory itself, though) and restart the DataNode daemon or service.
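For reference, the per-disk VERSION file under <dfs.datanode.data.dir>/current/ is a small properties file roughly along these lines; the clusterID, cTime, and layoutVersion values below are placeholders, and only datanodeUuid and the blanked-out storageID are the fields touched above:
$ cat /data0/hadoop/hdfs/data/current/VERSION
storageID=
clusterID=CID-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
cTime=0
datanodeUuid=1dacef53-aee2-4906-a9ca-4a6629f21347
storageType=DATA_NODE
layoutVersion=-57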
05-23-2018
05:43 AM
@SH Kim Did you try a graceful shutdown of the RegionServers and decommissioning of the DataNodes? Since you are running a very small number of nodes, it is always better to keep more than 50% of them available.
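A rough sketch of both operations, assuming an HDP-style layout; the hostnames, the graceful_stop.sh location, and the excludes-file path are placeholders, and dfs.hosts.exclude in hdfs-site.xml must already point at that excludes file:
$ /usr/hdp/current/hbase-client/bin/graceful_stop.sh regionserver1.example.com   # move regions off, then stop the RegionServer
$ echo "datanode1.example.com" >> /etc/hadoop/conf/dfs.exclude                   # mark the DataNode for decommission
$ hdfs dfsadmin -refreshNodes                                                    # tell the NameNode to start decommissioning
$ hdfs dfsadmin -report                                                          # watch the node's decommission status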
05-23-2018
05:29 AM
1 Kudo
@Ruslan Fialkovsky Yes, you can use both disks. But that will not solve your problem of using the SSD first and the HDD next; both will be treated in the same fashion.
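For example, both disks would simply be listed side by side in dfs.datanode.data.dir (the mount points below are placeholders), and by default the DataNode spreads blocks across them round-robin rather than preferring the SSD:
$ grep -A1 dfs.datanode.data.dir /etc/hadoop/conf/hdfs-site.xml
<name>dfs.datanode.data.dir</name>
<value>/ssd0/hadoop/hdfs/data,/hdd0/hadoop/hdfs/data</value>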
05-23-2018
05:23 AM
@vishal dutt The Spark driver is not able to find sqljdbc.jar on its classpath. When using spark-submit, the application jar along with any jars included with the --jars option will be automatically transferred to the cluster. URLs supplied after --jars must be separated by commas; that list is included in the driver and executor classpaths. Directory expansion does not work with --jars. Alternatively:
1) Set spark.driver.extraClassPath=/usr/hdp/hive/lib/mysql-connector-java.jar
2) Set spark.executor.extraClassPath=/usr/hdp/hive/lib/mysql-connector-java.jar
3) Add sqljdbc.jar to the Spark classpath or pass it with the --jars option.
Hope this helps you.
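A minimal spark-submit sketch along those lines; the class name, application jar, and driver jar path are placeholders:
$ spark-submit \
    --class com.example.MyApp \
    --master yarn \
    --jars /path/to/sqljdbc.jar \
    --conf spark.driver.extraClassPath=/path/to/sqljdbc.jar \
    --conf spark.executor.extraClassPath=/path/to/sqljdbc.jar \
    my-application.jar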
05-08-2018
03:06 AM
@Sim kaur
<property>
<name>hive.spark.client.connect.timeout</name>
<value>1000ms</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Timeout for remote Spark driver in connecting back to Hive client.
</description>
</property>
<property>
<name>hive.spark.client.server.connect.timeout</name>
<value>90000ms</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified. Timeout for handshake between Hive client and remote Spark driver. Checked by
both processes.
</description>
</property>
You can add the above properties to hive-site.xml. Since Spark reads the hive-site.xml file, the change will automatically be picked up in the Spark configuration. Hope this helps you.
04-19-2018
08:50 PM
With HIVE-13670: until now we had to remember the complete Hive connection string, whether using the direct HiveServer2 port (10000) or the ZooKeeper connection string. After that Jira, we can simplify this by setting an environment variable (e.g. in /etc/profile) on the edge nodes:
export BEELINE_URL_HIVE="<jdbc url>"
Example:
export BEELINE_URL_HIVE="jdbc:hive2://<ZOOKEEPER QUORUM>/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
Now just type:
beeline -u HIVE
We can even set up multiple connection strings by defining differently named variables such as BEELINE_URL_BATCH or BEELINE_URL_LLAP. Hope this helps you.
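For instance, two named URLs could sit side by side in /etc/profile; the ZooKeeper quorum and the second namespace below are placeholders:
$ export BEELINE_URL_HIVE="jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
$ export BEELINE_URL_LLAP="jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-hive2"
$ beeline -u HIVE   # resolves to $BEELINE_URL_HIVE
$ beeline -u LLAP   # resolves to $BEELINE_URL_LLAP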
03-17-2018
06:19 PM
@David Manukian Try to see whether the process is running or not:
ps -ef | grep ambari-server
and check whether the port is listening:
lsof -nalp | grep 8080
Then, if everything is working as expected, try the fully qualified domain name, like FQDN:8080. Hope this helps you.
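A quick set of checks along those lines; this variant uses lsof's -i filter instead of grepping the full list, and assumes the default Ambari web port 8080:
$ ps -ef | grep ambari-serve[r]        # process check (the bracket keeps grep from matching itself)
$ lsof -nP -i :8080                    # is anything listening on the Ambari web port?
$ curl -I http://$(hostname -f):8080   # reachability check via the FQDN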