Member since
09-17-2016
31
Posts
2
Kudos Received
4
Solutions
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1861 | 03-23-2020 11:38 PM |
|  | 14324 | 07-27-2018 08:45 AM |
|  | 3123 | 05-09-2018 08:28 AM |
|  | 975 | 10-21-2016 06:29 AM |
07-24-2018
04:29 PM
Hi, I am getting the below error in the log while starting the DataNode; the DataNode stops immediately after starting.

tailf /var/log/hadoop/hdfs/hadoop-hdfs-datanode-[hostname].log

2018-07-24 10:45:18,282 INFO common.Storage (Storage.java:tryLock(776)) - Lock on /mnt/dn/sdl/datanode/in_use.lock acquired by nodename 55141@dat01.node
2018-07-24 10:45:18,283 WARN common.Storage (DataStorage.java:loadDataStorage(449)) - Failed to add storage directory [DISK]file:/mnt/dn/sdl/datanode/
java.io.FileNotFoundException: /mnt/dn/sdl/datanode/current/VERSION (Permission denied)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at org.apache.hadoop.hdfs.server.common.StorageInfo.readPropertiesFile(StorageInfo.java:245)
at org.apache.hadoop.hdfs.server.common.StorageInfo.readProperties(StorageInfo.java:231)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:779)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:322)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:438)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:417)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:595)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1543)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1504)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:319)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:272)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:768)
at java.lang.Thread.run(Thread.java:748)
2018-07-24 10:45:18,283 ERROR datanode.DataNode (BPServiceActor.java:run(780)) - Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to nam02.node/192.168.19.3:8020. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:596)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1543)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1504)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:319)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:272)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:768)
at java.lang.Thread.run(Thread.java:748)
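The `Permission denied` on `current/VERSION` usually means the storage directory's ownership no longer matches the user the DataNode runs as. A minimal diagnostic sketch, assuming the DataNode runs as the usual `hdfs` user (verify with `ps -ef | grep datanode` on your host) and using the path from the log above:

```shell
# Path taken from the log above; adjust for each failed storage directory.
DN_DIR=/mnt/dn/sdl/datanode

# Show owner:group of the storage directory and the VERSION file the
# DataNode failed to read.
stat -c '%U:%G %n' "$DN_DIR" "$DN_DIR/current/VERSION"

# If ownership is wrong (e.g. root:root after a manual mount or copy),
# restore it recursively and restart the DataNode. hdfs:hadoop is the
# typical owner on HDP clusters, but confirm against a healthy disk first:
# chown -R hdfs:hadoop "$DN_DIR"
```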
Labels:
- Apache Hadoop
05-10-2018
07:16 AM
Nice explanation, @Jay Kumar SenSharma.
05-10-2018
07:05 AM
I tried and verified this on my 10-node cluster. It worked perfectly.
05-09-2018
08:31 AM
We can delete it, kevin han. If you want, you can check my solution; it worked for me.
05-09-2018
08:31 AM
Thanks, Umair Khan. I got the solution.
05-09-2018
08:28 AM
We can check any client/service component on a node using the below URL:

http://ambari_host:8080/api/v1/clusters/PANDA/hosts/host_name/host_components

The below commands can be used for deleting a client/service component. Here PANDA is the cluster name, ambari_host is the Ambari server host, and host_name is the FQDN of the host.

export AMBARI_USER=rambabu
export AMBARI_PASSWD=password
curl -u $AMBARI_USER:$AMBARI_PASSWD -H 'X-Requested-By: ambari' -X GET "http://ambari_host:8080/api/v1/clusters/PANDA/hosts/host_name/host_components/MYSQL_SERVER"
curl -u $AMBARI_USER:$AMBARI_PASSWD -H 'X-Requested-By: ambari' -X DELETE "http://ambari_host:8080/api/v1/clusters/PANDA/hosts/host_name/host_components/MYSQL_SERVER"
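If the DELETE call is rejected while the component is still installed or running, Ambari generally requires moving the component to the stopped (INSTALLED) state first. A hedged sketch using the same placeholders as above:

```shell
export AMBARI_USER=rambabu
export AMBARI_PASSWD=password

# Request body that asks Ambari to move the component into the
# stopped (INSTALLED) state before attempting the DELETE.
BODY='{"HostRoles":{"state":"INSTALLED"}}'

curl -u $AMBARI_USER:$AMBARI_PASSWD -H 'X-Requested-By: ambari' -X PUT \
  -d "$BODY" \
  "http://ambari_host:8080/api/v1/clusters/PANDA/hosts/host_name/host_components/MYSQL_SERVER"
```

After the component reports the INSTALLED state, the DELETE call above should succeed.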
05-08-2018
01:37 PM
Hi, I moved the Hive MySQL Server from node1 to node2 using Ambari. In between, the MySQL Server install failed. MySQL Server moved to node2, but the existing MySQL Server on node1 has not been deleted. I tried to delete it using Ambari, but the delete option is disabled. Please check the attached screenshot. Looking forward to your reply.
Labels:
- Apache Hive
12-29-2016
11:26 AM
1 Kudo
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

import java.util.Properties;

public class KafkaProducer {

    private static Producer<Integer, String> producer;
    private final Properties properties = new Properties();

    public KafkaProducer() {
        // Broker list and security settings for a SASL-enabled (Kerberized) cluster.
        properties.put("metadata.broker.list", "buckland:6667,laverne:6667,mahoney:6667");
        properties.put("serializer.class", "kafka.serializer.StringEncoder");
        properties.put("request.required.acks", "1");
        properties.put("security.protocol", "PLAINTEXTSASL");
        properties.put("producer.type", "async");
        ProducerConfig config = new ProducerConfig(properties);
        producer = new Producer<Integer, String>(config);
    }

    public static void main(String[] args) {
        System.out.println("in Kafka Producer ****");
        new KafkaProducer();
        String topic = "dev.raw.mce.energy.gas";
        String msg = "***** message from KafkaProducer class *****";
        KeyedMessage<Integer, String> data = new KeyedMessage<>(topic, msg);
        producer.send(data);
        producer.close();
        System.out.println(" Kafka Producer is over ****");
    }
}
-- Submitting the producer from the program jar:
spark-submit \
--master local[2] \
--num-executors 2 \
--driver-memory 1g \
--executor-memory 2g \
--conf "spark.driver.extraJavaOptions= -Djava.security.auth.login.config=/tmp/rchamaku/kafka/kafka_client_jaas.conf" \
--class KafkaProducer \
--name "Sample KafkaProducer by Ram" \
/tmp/rchamaku/kafka/TestKafka-0.0.1-SNAPSHOT-driver.jar
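To verify the message actually arrived, a console consumer can be pointed at the same topic. A sketch, assuming an HDP-style install path, the same broker security setup, and the same JAAS file as the producer above (flags differ across Kafka versions, so check your version's options first):

```shell
# Pass the same JAAS config the producer used, so the consumer can
# authenticate over SASL.
export KAFKA_OPTS="-Djava.security.auth.login.config=/tmp/rchamaku/kafka/kafka_client_jaas.conf"

# Topic name taken from the producer code above.
TOPIC=dev.raw.mce.energy.gas

# /usr/hdp/current/kafka-broker is the usual HDP install location; adjust
# the script path and connection flags to match your Kafka version.
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
  --topic "$TOPIC" \
  --from-beginning
```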
Labels:
- Apache Kafka
10-21-2016
06:29 AM
Now it's working. I made one change in workflow.xml: I replaced

<file>
/dev/datalake/app/mce/oozie/rchamaku/mercureaddin_2.10-0.1-SNAPSHOT.jar#mercureaddin.jar
</file>

with

<file>
/dev/datalake/app/mce/oozie/rchamaku/mercureaddin_2.10-0.1-SNAPSHOT.jar#mercureaddin_2.10-0.1-SNAPSHOT.jar
</file>

Regards, Rambabu.
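For context: in an Oozie <file> element, the text after # is the symlink name Oozie creates in the action's working directory, so the running job must reference that exact filename. A quick hedged check that the jar exists at the listed HDFS path:

```shell
# The part after '#' in <file> becomes the symlink name in the action's
# working directory; the job code must use that exact name.
JAR=/dev/datalake/app/mce/oozie/rchamaku/mercureaddin_2.10-0.1-SNAPSHOT.jar

# Confirm the jar is actually present at the path the workflow references.
hdfs dfs -ls "$JAR"
```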