Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2712 | 04-27-2020 03:48 AM |
| | 5269 | 04-26-2020 06:18 PM |
| | 4440 | 04-26-2020 06:05 PM |
| | 3558 | 04-13-2020 08:53 PM |
| | 5364 | 03-31-2020 02:10 AM |
04-05-2017
12:09 PM
@Sai Dileep
Good to know that your original issue is resolved. For the logging-related issue, please check the output of the following command and verify that your changes are reflected there, especially that "zookeeper.log.file" and "zookeeper.root.logger" point to the correct values:
# ps -ef | grep zookeeper
516 969 0 0 Apr03 ? 00:01:52 /usr/lib/jvm/java/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.log.file=zookeeper-zookeeper-server-sandbox.hortonworks.com.log -Dzookeeper.root.logger=INFO,ROLLINGFILE ........................
Also, it is good practice to keep the hostname appended to the ZooKeeper log file name; that makes it easier to differentiate between logs from different hosts:
# grep 'ZOO_LOG_FILE' /usr/hdp/current/zookeeper-server/bin/zkServer.sh
ZOO_LOG_FILE=zookeeper-$USER-server-$HOSTNAME.log
04-05-2017
11:29 AM
1 Kudo
@oula.alshiekh@gmail.com alshiekh There are basically two fencing methods that ship with Hadoop: "shell" and "sshfence". The sshfence option SSHes to the target node and uses fuser to kill the process listening on the service's TCP port. For this fencing option to work, it must be able to SSH to the target node without providing a passphrase, so one must also configure the dfs.ha.fencing.ssh.private-key-files option, which is a comma-separated list of SSH private key files (a sample snippet for this follows after the reference below).
You can also specify a username and port of your choice, using the form "sshfence([[username][:port]])":
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence([[username][:port]])</value>
</property>
[1] Reference: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html
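Since passphraseless SSH is required, dfs.ha.fencing.ssh.private-key-files also has to point at the right key. A minimal sketch; the key path below is only an assumed example, so use wherever the key for your fencing user actually lives:
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/exampleuser/.ssh/id_rsa</value>
</property>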
04-05-2017
09:44 AM
@Sai Dileep Please try the following:
1). Make sure that "Advanced zookeeper-log4j" in the Ambari UI has the following lines, so that ZooKeeper runs in DEBUG mode and the log file location includes the correct "${zookeeper.log.dir}" (a reference sketch of the full ROLLINGFILE appender block follows after this list):
log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE
.
.
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/zookeeper.log
2). Restart ZooKeeper and then check and share the log: /var/log/zookeeper/zookeeper.log
3). Also check whether the OS resources are sufficient:
# free -m
# top
4). Try deleting the ZooKeeper PID file and then restart again: /var/run/zookeeper/zookeeper_server.pid
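For reference, the ROLLINGFILE appender block in "Advanced zookeeper-log4j" usually looks roughly like the sketch below. The MaxFileSize and ConversionPattern values are assumptions based on the stock ZooKeeper log4j defaults, so keep whatever your cluster already has and only verify the Threshold and File entries:
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=DEBUG
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/zookeeper.log
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n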
04-05-2017
06:28 AM
1 Kudo
@ARUN You can take the following references to achieve this using MySQL --> Kafka --> HBase:
1). https://github.com/wushujames/kafka-mysql-connector
2). https://github.com/mravi/kafka-connect-hbase
If you are open to other options, such as using "sqoop", then you can refer to the following as well (a sample sqoop import command follows after this list):
1). https://sqoop.apache.org/docs/1.4.2/SqoopUserGuide.html#_importing_data_into_hbase
2). http://www.dummies.com/programming/big-data/hadoop/importing-data-into-hbase-with-sqoop/
3). https://acadgild.com/blog/how-to-import-table-from-mysql-to-hbase/
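To illustrate the sqoop route, a minimal sketch of a MySQL-to-HBase import; the hostname, database, table, row key and column-family names below are made-up placeholders:
# sqoop import --connect jdbc:mysql://mysqlhost/testdb \
    --username dbuser --password dbpass \
    --table customers \
    --hbase-table customers \
    --column-family cf \
    --hbase-row-key id \
    --hbase-create-table
This reads the "customers" table over JDBC and writes each row into the HBase table "customers" under column family "cf", creating the HBase table if it does not already exist.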
04-05-2017
05:53 AM
@heta desai Copying a snippet from: http://stackoverflow.com/questions/16546040/store-images-videos-into-hadoop-hdfs
It is absolutely possible without doing anything extra. Hadoop provides the facility to read/write binary files, so practically anything which can be converted into bytes can be stored in HDFS (images, videos, etc.). For this, Hadoop provides something called SequenceFiles. A SequenceFile is a flat file consisting of binary key/value pairs, and it provides Writer, Reader and Sorter classes for writing, reading and sorting respectively. So you could convert your image/video file into a SequenceFile and store it in HDFS.
Some examples:
http://www.tothenew.com/blog/how-to-manage-and-analyze-video-data-using-hadoop/
https://content.pivotal.io/blog/using-hadoop-mapreduce-for-distributed-video-transcoding
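As a minimal sketch of the "nothing extra" route mentioned above, storing a media file as raw bytes is just a normal put (the paths and file name here are made-up examples):
# hdfs dfs -mkdir -p /data/media
# hdfs dfs -put video.mp4 /data/media/
# hdfs dfs -ls /data/media
SequenceFiles become worthwhile mainly when you have very many small image/video files, since packing them as key/value pairs avoids the NameNode overhead of keeping track of millions of tiny files.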
04-05-2017
04:33 AM
@Lucy zhang If you are using the Sandbox then you should use ssh port 2222 instead of 22 to connect and access all the commands:
# ssh user@192.168.12.1 -p 2222
If it is not a sandbox then make sure that you have installed the YARN Clients on the host where you are trying to run the yarn commands (see the quick check below). If the YARN clients are already installed on that host, then which yarn command exactly is not working? If you still face any error/exception, please share the trace.
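A quick way to confirm the YARN client bits are present on a non-sandbox host (assuming an HDP-style layout; the exact path below is an assumption and may differ on your install):
# which yarn
# yarn version
# ls /usr/hdp/current/hadoop-yarn-client/
If these fail, add the YARN Clients component to that host from the Ambari UI and retry.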
04-05-2017
02:10 AM
@darkz yu Please check the "templeton.hive.properties" value in "Advanced webhcat-site" in the Ambari UI, then verify whether you are able to connect to the host/port defined there using telnet. Example:
templeton.hive.properties = hive.metastore.local=false,hive.metastore.uris=thrift://erie1.example.com:9083,hive.metastore.sasl.enabled=false,hive.metastore.execute.setugi=true,hive.metastore.warehouse.dir=/apps/hive/warehouse
So check that the port is actually open and reachable from the remote host. Example:
telnet erie1.example.com 9083
If you are not able to access the mentioned host/port from the remote client, then check your firewall (iptables) rules and make the port accessible from the remote host.
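If the telnet test fails, two quick checks on the metastore host itself can narrow it down: whether the metastore is actually listening, and whether an iptables rule touches the port (9083 is taken from the example value above):
# netstat -tlnp | grep 9083
# iptables -L -n | grep 9083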
04-05-2017
01:33 AM
@rakesh kumar Security settings dictate whether DistCp should be run on the source cluster or the destination cluster. The general rule of thumb is that if one cluster is secure and the other is not, DistCp should be run from the secure cluster (whereas you are trying to distcp from the insecure to the secure Hadoop cluster), otherwise there may be security-related issues. When copying data from a secure cluster to a non-secure cluster, the following configuration setting is required for the DistCp client:
<property>
  <name>ipc.client.fallback-to-simple-auth-allowed</name>
  <value>true</value>
</property>
When copying data from a secure cluster to another secure cluster, the following configuration setting is required in the core-site.xml file:
<property>
  <name>hadoop.security.auth_to_local</name>
  <value></value>
  <description>Maps kerberos principals to local user names</description>
</property>
The Hortonworks recommendation is the same: if "one cluster is secure and the other is not secure, DistCp should be run from the secure cluster".
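As a minimal sketch of how that run could look from the secure cluster, passing the fallback property on the command line (the NameNode hostnames and paths are made-up placeholders, and a valid Kerberos ticket via kinit is assumed):
# hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true hdfs://insecure-nn.example.com:8020/source/data hdfs://secure-nn.example.com:8020/target/data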
04-04-2017
05:06 PM
In the meantime, HDP 2.6 has been released; you might want to check the release notes: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_release-notes/content/ch_relnotes.html
04-04-2017
04:40 PM
@Divakar Annapureddy Unfortunately it (the HDP 2.6 Sandbox) is not currently available. I will check for the expected date and time of its release.