Member since: 04-27-2016
Posts: 61
Kudos Received: 61
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1821 | 09-19-2016 05:42 PM |
| | 437 | 06-11-2016 06:41 AM |
| | 1538 | 06-10-2016 05:17 PM |
05-31-2018
06:16 PM
While trying to start the NameNode from Ambari, it fails with the below error: raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/3.0.0.0-1371/hadoop/bin/hdfs --config /usr/hdp/3.0.0.0-1371/hadoop/conf --daemon start namenode'' returned 1. WARNING: HADOOP_NAMENODE_OPTS has been replaced by HDFS_NAMENODE_OPTS. Using value of HADOOP_NAMENODE_OPTS.
ERROR: Cannot set priority of namenode process 355
Does anybody know a solution? Thanks in advance.
01-23-2017
06:36 PM
1 Kudo
Looking for help with the below support-related questions: 1. What versions of the software do you currently provide support for? 2. What was the release date of the oldest version currently supported?
- Tags:
- Hadoop Core
- hdp-2.3.4
- support
- Upgrade to HDP 2.5.3 : ConcurrentModificationException When Executing Insert Overwrite : Hive
- version
12-21-2016
07:16 PM
I went back to HDP 2.5.0 and there are no errors now. It looks like 2.5.3 has this service-start issue. Thanks for the help.
12-20-2016
11:00 PM
1 Kudo
If you are looking for a way to see all the retained messages in one go, that is what the '--from-beginning' parameter of kafka-console-consumer.sh does. The command below will delete the Kafka topic entirely, so that all the messages in the topic are purged: kafka-topics.sh --zookeeper <ZK quorum> --delete --topic topicname
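As a sketch, the two commands look like this (the broker/ZooKeeper addresses and topic name are placeholders for your own cluster):

```shell
# Replay every retained message in the topic from offset 0
kafka-console-consumer.sh --zookeeper zk1.example.com:2181 \
  --topic my-topic --from-beginning

# Delete the topic entirely, purging all of its messages.
# Note: this only takes effect if delete.topic.enable=true on the brokers.
kafka-topics.sh --zookeeper zk1.example.com:2181 \
  --delete --topic my-topic
```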
12-20-2016
10:45 PM
@Michael Young Thanks. I went ahead and did a 'Restart All' on all the services that failed to start. It worked for some of them. However, for a few others like the History Server, HiveServer2, NameNode etc., I am still getting the following errors in the logs. Could you help me there? Thanks.
History Server:
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/2.5.3.0-37/hadoop/mapreduce.tar.gz 'http://ganne-test0.field.hortonworks.com:50070/webhdfs/v1/hdp/apps/2.5.3.0-37/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444' 1>/tmp/tmpCBuCHv 2>/tmp/tmp76U3W6' returned 52. curl: (52) Empty reply from server
100
Hiveserver 2:
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X GET 'http://ganne-test0.field.hortonworks.com:50070/webhdfs/v1/user/hcat?op=GETFILESTATUS&user.name=hdfs' 1>/tmp/tmpNYUL1Z 2>/tmp/tmpij3JWm' returned 7. curl: (7) Failed connect to ganne-test0.field.hortonworks.com:50070; Connection refused
000
12-20-2016
10:32 PM
1 Kudo
Hi, the question is a little unclear to me, but from what I understand you are unable to generate data in NiFi? If so, here are my two cents: 1. The 'ExecuteProcess' processor expects an executable script/process, so make sure the generate.sh script has the right 'execute' permissions and that you provide its correct path (~/iot-truck-streaming/stream-simulator/generate.sh). 2. Check which user currently has rights to the directory where the code is stored. You could make the 'nifi' user own the folder, then run it and check.
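A minimal sketch of that permission check (the path comes from the post; the 'nifi' user/group name is an assumption, adjust to your install):

```shell
# Make the script executable and give the nifi service user ownership
chmod +x ~/iot-truck-streaming/stream-simulator/generate.sh
chown -R nifi:nifi ~/iot-truck-streaming/stream-simulator

# Verify the script runs standalone as that user before wiring it
# into the ExecuteProcess processor
sudo -u nifi ~/iot-truck-streaming/stream-simulator/generate.sh
```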
12-20-2016
10:21 PM
1 Kudo
Can you try using the argument "--as-parquetfile" and see if that helps? Sqoop has supported this option since version 1.4.6.
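A hedged example of such an import (the JDBC URL, credentials, table, and target directory are all placeholders):

```shell
# Import a table directly as Parquet files (option available since Sqoop 1.4.6)
sqoop import \
  --connect jdbc:mysql://db.example.com/mydb \
  --username myuser -P \
  --table mytable \
  --target-dir /user/hdfs/mytable_parquet \
  --as-parquetfile
```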
12-20-2016
10:13 PM
I have faced a similar issue, but mine had to do with insufficient resources for too many running services. I had to increase the number of DataNodes and make changes to YARN, and that resolved it.
12-20-2016
10:07 PM
1 Kudo
Repo Description
The DiP API has been tested on the below HDP 2.4 components:
- Apache Hadoop 2.7.1.2.4
- Apache Kafka 0.9.0.2.4
- Apache Apex 3.4.0
- Apache HBase 1.1.2.2.4
- Apache Hive 1.2.1.2.4
- Apache Zeppelin 0.6.0.2.4
- Apache Tomcat Server 8.0
- Apache Phoenix 4.4.0.2.4
- Apache Maven
- Java 1.7 or later

Repo Info
Github Repo URL: https://github.com/XavientInformationSystems/Data-Ingestion-Platform/tree/master/dataingest-apex
Github account name: XavientInformationSystems
Repo name: dataingest-apex
- Tags:
- ambari-extensions
- Data Ingestion & Streaming
12-20-2016
10:02 PM
3 Kudos
While trying to install HDP 2.5.3 on a 4-node cluster via the Ambari wizard, I passed all the steps and got to the 'Install, Start and Test' stage, but it warns that many services failed to start. Please see the attached screenshots. Did anyone else face this issue?
12-20-2016
09:43 PM
1 Kudo
Repo Description
You can now deploy DataTorrent RTS within the Ambari stack. Such a deployment ensures simplified management of the DataTorrent RTS setup.
Note: DataTorrent RTS, powered by Apache Apex, provides a high-performing, fault-tolerant, scalable, easy-to-use data processing platform for batch and streaming workloads. It includes advanced management, monitoring, development, visualization, data ingestion, and distribution features.

Repo Info
Github Repo URL: https://github.com/DataTorrent/ambari-datatorrent-service
Github account name: DataTorrent
Repo name: ambari-datatorrent-service
- Tags:
- ambari-extensions
- Cloud & Operations
12-20-2016
09:32 PM
Hi, HBase tables can be loaded with data from external sources using Sqoop, but that data cannot yet be properly accessed by Phoenix: https://issues.apache.org/jira/browse/SQOOP-2649 is still not fully addressed. What you can do is try other ways of importing.
11-14-2016
03:42 PM
Appreciate your input. Yes, the issue was the keytab and principal. I had to kinit with the specific principal instead of the default one, and the error was gone. Thanks.
11-11-2016
07:31 PM
I am running into the errors shown in the pictures and log file while trying to submit a topology on a Kerberized cluster. Nimbus is running on lake2.field.hortonworks.com. Can one of the Kerberos gurus shed some light here? Appreciate any help!
10-31-2016
12:30 AM
2 Kudos
I had a similar issue on Kafka 0.10.0 (the latest); resolving the hostname or FQDN to the IP address or loopback address fixed it.
10-31-2016
12:19 AM
Flume is an engine most commonly used for efficiently collecting, aggregating, and moving large amounts of streaming data into HDFS. As suggested above, the Flume Kafka sink (https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_HDP_RelNotes/content/ch01s08s01.html) will let your Flume agents push data to Kafka topics; the sink is defined in the Kafka-Flume configuration in your code.
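A minimal Flume agent sketch with a Kafka sink, to show where the sink is defined (the agent name, source directory, topic, and broker list are all assumptions for illustration):

```
# flume-kafka.conf -- hypothetical agent pushing spooled files to a Kafka topic
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = kafkaSink

# Source: watch a directory for new files
agent1.sources.src1.type = spooldir
agent1.sources.src1.spoolDir = /var/log/incoming
agent1.sources.src1.channels = ch1

# Channel: in-memory buffer between source and sink
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 10000

# Sink: publish events to a Kafka topic
agent1.sinks.kafkaSink.type = org.apache.flume.sink.kafka.KafkaSink
agent1.sinks.kafkaSink.topic = my-topic
agent1.sinks.kafkaSink.brokerList = broker1:6667,broker2:6667
agent1.sinks.kafkaSink.channel = ch1
```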
09-28-2016
02:20 AM
@Timothy Spann Yes, that code was written for Storm 0.10. Now I am trying to test it against 1.0.1. I updated the POM with the necessary Storm and Kafka versions and added the Guava dependency as suggested in the above link, but I am still getting build errors.
09-27-2016
11:30 PM
1 Kudo
I am facing the classic Storm compatibility issue here with a demo. A Java file in it does "import org.apache.storm.guava.collect.Lists;". With the new 2.5 Sandbox, the Storm version is 1.0.1.2.5.0.0-1245. When I compile the code by running 'mvn clean package', I run into compilation and build errors, as you can see in the attached image. I have also attached the pom.xml file. It complains about the Guava dependencies, which are shaded in the new Storm release. Please help me resolve the dependency issue. Thanks. pom.xml
09-19-2016
06:34 PM
As a workaround, you can SSH into the sandbox using "ssh root@127.0.0.1 -p 2222" and run ambari-admin-password-reset. This generally works.
09-19-2016
06:30 PM
4 Kudos
@Kirk Haslbeck What worked for me was restarting the ambari-server and ambari-agent.
09-19-2016
05:56 PM
1 Kudo
First-time users receive $200 in credits for the HDP sandbox from Azure under the subscription name 'one month free trial'. And yes, you will receive a bill for sandbox usage.
09-19-2016
05:42 PM
4 Kudos
Go to VirtualBox Preferences > Network > Host-only Networks and add a host-only network for vboxnet0. This will solve the problem.
09-19-2016
05:37 PM
1 Kudo
On the HDP 2.5 Sandbox, my YARN application is submitted using Slider, but it does not run for more than 2 minutes. 3 containers are allocated, then it fails after the 2 min 32 sec mark and goes to the FINISHED state. I am attaching the YARN application log (yarn-log.zip) for reference. I can provide additional logs/details if required. Thanks in advance for the help.
08-29-2016
12:17 AM
@Ted Yu Thanks. I see that the hbase-default.xml packaged in "hbase-common-1.1.2.2.5.0.0-817.jar" has 'hbase.master.logcleaner.plugins' set to 'org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner'. I am attaching my hbase-site.xml and hbase-default.xml files; please let me know if you can spot the issue. I also keep getting the below error in a Storm topology: 'java.lang.RuntimeException: hbase-default.xml file seems to be for an older version of HBase (1.1.2), this version is 1.1.2.2.5.0.0-817'
08-26-2016
08:36 PM
Sure, I am attaching the pom.xml. Yes, the Kafka dependencies are taken care of in the file. In fact, this topology runs completely fine on HDP 2.4 and 2.3. Please let me know if you have any inputs.
08-26-2016
06:31 PM
Thanks. I added the .out files. Please let me know your thoughts.
08-26-2016
06:27 PM
On the HDP 2.5 TP Sandbox, running a Storm topology for a telecom demo gives me the below error: java.lang.NoSuchFieldError: PLAINTEXTSASL at kafka.utils.CoreUtils$.isSaslProtocol(CoreUtils.scala:282) at kafka.consumer.SimpleConsumer.<init>(SimpleConsumer.scala:46) at kafka.javaapi.consumer.Si.. I am attaching a picture. Can anybody help me identify the issue? Is it related to Kerberos and SSL on Kafka? Thanks.
08-26-2016
06:19 PM
On the HDP 2.5 TP Sandbox, both the HBase Master and RegionServers are shutting down continuously. Attempting to restart the service from Ambari also results in it going down after a few seconds. Attaching the logs for reference. Any inputs appreciated. Thanks.
07-29-2016
02:37 AM
1 Kudo
Please check the container size and increase its memory. Also check the JVM options.
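For reference, the container memory knobs live in yarn-site.xml and mapred-site.xml; a sketch of the relevant properties (the values below are purely illustrative, not recommendations):

```
<!-- yarn-site.xml: per-container memory bounds (illustrative values) -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>
</property>

<!-- mapred-site.xml: map task container size and its JVM heap option -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1638m</value>
</property>
```

Keep the JVM heap (-Xmx) somewhat below the container size so the container is not killed for exceeding its memory limit.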