Member since: 07-04-2016
Posts: 40
Kudos Received: 0
Solutions: 2
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2336 | 09-05-2016 12:05 PM |
 | 861 | 09-05-2016 12:03 PM |
11-17-2016
08:22 AM
@Mugdha wow... that fixed everything. Can't believe I didn't clue into that. THANK YOU!
11-16-2016
08:45 AM
Hi there, I have a 6-node HDP cluster, and am trying to install and use the HTTPFS server. I have enabled NameNode HA on my cluster. I followed this article by @David Streever: HTTPFS - Configure and Run with HDP. The error I'm getting is below. I have played around with this a ton, and this is actually the third time I've tried installing it on a fresh node. I am hoping someone can help, as I am stumped.
[root@xxxxxxxxxxxx sbin]# ./httpfs.sh run
/usr/hdp/current/hadoop-httpfs/sbin/httpfs.sh.distro: line 32: /usr/hdp/current/hadoop/libexec/httpfs-config.sh: No such file or directory
/usr/hdp/current/hadoop-httpfs/sbin/httpfs.sh.distro: line 37: print: command not found
/usr/hdp/current/hadoop-httpfs/sbin/httpfs.sh.distro: line 50: print: command not found
Using CATALINA_BASE: /usr/hdp/current/hadoop-httpfs
Using CATALINA_HOME: /etc/hadoop-httpfs/tomcat-deployment
Using CATALINA_TMPDIR: /usr/hdp/current/hadoop-httpfs/temp
Using JRE_HOME: /usr
Using CLASSPATH: /etc/hadoop-httpfs/tomcat-deployment/bin/bootstrap.jar
Nov 16, 2016 8:29:31 AM org.apache.tomcat.util.digester.SetPropertiesRule begin
WARNING: [SetPropertiesRule]{Server} Setting property 'port' to '' did not find a matching property.
Nov 16, 2016 8:29:31 AM org.apache.catalina.core.AprLifecycleListener init
INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Nov 16, 2016 8:29:31 AM org.apache.coyote.http11.Http11Protocol init
INFO: Initializing Coyote HTTP/1.1 on http-0
Nov 16, 2016 8:29:31 AM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 511 ms
Nov 16, 2016 8:29:31 AM org.apache.catalina.core.StandardService start
INFO: Starting service Catalina
Nov 16, 2016 8:29:31 AM org.apache.catalina.core.StandardEngine start
INFO: Starting Servlet Engine: Apache Tomcat/6.0.44
Nov 16, 2016 8:29:31 AM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory webhdfs
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Nov 16, 2016 8:29:32 AM org.apache.catalina.core.StandardContext start
SEVERE: Error listenerStart
Nov 16, 2016 8:29:32 AM org.apache.catalina.core.StandardContext start
SEVERE: Context [/webhdfs] startup failed due to previous errors
Nov 16, 2016 8:29:32 AM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory ROOT
Nov 16, 2016 8:29:32 AM org.apache.coyote.http11.Http11Protocol start
INFO: Starting Coyote HTTP/1.1 on http-0
Nov 16, 2016 8:29:32 AM org.apache.catalina.startup.Catalina start
INFO: Server startup in 737 ms
After seeing this, I thought the first problem seemed to be "Tomcat Native library... was not found...". So I ran yum install tomcat, which did install things. But then I tried to start httpfs and got all of the same errors... so I may have to undo that now. I'm hoping that I'm just missing something simple. Any help is much appreciated.
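For what it's worth, the very first line of the output looks like the real problem: httpfs.sh tries to source httpfs-config.sh from /usr/hdp/current/hadoop/libexec, a path that doesn't exist on my layout (the client bits live under hadoop-client). A sketch of what I plan to try, assuming the wrapper honors HADOOP_LIBEXEC_DIR — worth verifying against your own httpfs.sh:

```bash
# check where httpfs-config.sh actually lives on this box
find /usr/hdp -name httpfs-config.sh 2>/dev/null

# point the wrapper at the right libexec dir before launching
# (HADOOP_LIBEXEC_DIR is an assumption -- confirm the script reads it)
export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec
/usr/hdp/current/hadoop-httpfs/sbin/httpfs.sh run
```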
Labels:
- Apache Hadoop
09-12-2016
12:31 PM
It caused such a disaster the last time that I don't want to mess things up any more than they already are... but maybe that's the best option at this point. Thank you for your advice and help @Artem Ervits, I really appreciate it.
09-12-2016
12:23 PM
Oops, thank you! I can repost them with x's instead if you want. I've discovered that part of the issue was actually a disk space issue. Things are much cleaner now that that's been solved, but my secondary namenode process still says it can't connect.
09-09-2016
08:57 AM
Hi there, So all of my hosts and services were working. I then tried to Enable NameNode HA, but the wizard failed, so I rolled back using the guide here. I'm guessing troubleshooting these errors individually will not get me far, as I'm sure it is not a coincidence that they all happened after this rollback, and only on the secondary NameNode. The other services on this host (storm, falcon, yarn, flume, hive, etc.) run and aren't crashing. Is there a way to troubleshoot the SNameNode a bit? I'm very new to hadoop and ambari. Service problems are as follows, and these issues recur exactly the same after clean-up, agent restarts, server restarts, etc.:
- Oozie Server: [Errno 111] Connection refused; startup succeeds
- DataNode: [Errno 111] Connection refused; was up for about 10 minutes, startup fails now
- RegionServer: [Errno 111] Connection refused; startup succeeds, crashed later
- Accumulo TServer: [Errno 111] Connection refused; startup succeeds, but then it fails a few seconds later
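In case it helps with triage, here's the kind of generic check I've been running for the "[Errno 111] Connection refused" alerts — it usually just means nothing is listening on the port Ambari probes (the ports and log path below are illustrative; check each service's configured values):

```bash
# is anything listening on the ports Ambari checks?
netstat -tlnp | grep -E '11000|16020|9997'    # Oozie / RegionServer / Accumulo TServer

# and look at why the secondary NameNode can't connect
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-secondarynamenode-*.log
```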
09-06-2016
09:24 AM
@lraheja Oh, silly me! That fixed everything and now my namenode is working!!!! Thank you sooo much for your help.
09-06-2016
07:53 AM
Hi @lraheja, thanks for your response. I ran the command you suggested, and the result is the same error. I had restarted all of my instances and stopped all services again. Is there something else I could do in addition to this? It also prints the following retry attempt ten times in the stdout for the operation; maybe that's the issue? I turned safemode off using the same command but with "leave" instead of "get".
2016-09-06 07:35:08,376 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://abc.xyz.com -safemode get | grep 'Safe mode is OFF'' returned 1.
And here is the error again:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 408, in <module>
NameNode().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 103, in start
upgrade_suspended=params.upgrade_suspended, env=env)
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 212, in namenode
create_hdfs_directories(is_active_namenode_cmd)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 278, in create_hdfs_directories
only_if=check
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 463, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 460, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 246, in action_delayed
main_resource.resource.security_enabled, main_resource.resource.logoutput)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 135, in __init__
security_enabled, run_user)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/namenode_ha_utils.py", line 167, in get_property_for_active_namenode
if INADDR_ANY in value and rpc_key in hdfs_site:
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 81, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'dfs.namenode.http-address' was not found in configurations dictionary!
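For reference, these are the safe-mode commands I was referring to above (standard hdfs dfsadmin subcommands; the -fs URI is the placeholder from the log):

```bash
hdfs dfsadmin -fs hdfs://abc.xyz.com -safemode get    # report the current state
hdfs dfsadmin -fs hdfs://abc.xyz.com -safemode leave  # take the NameNode out of safe mode
```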
09-05-2016
12:40 PM
Hi there, I have a fresh installation of HDP 2.3.4 on a 5-node cluster. All of my services were running successfully, with statistics displayed in the widgets. I had not had any NameNode issues until today. Earlier today I started the "Enable NameNode HA" wizard. It failed at the first step in the installation phase (I think it was the namenode) and retrying didn't work, but I wasn't able to move forward or back in the process, so I left and followed https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Ambari_Users_Guide/content/_how_to_roll_back_namenode_ha.html. After completing the entire guide (and I've now gone back and done the whole thing over in case I missed something), I started HDFS (step 1.2.13) and the operation failed for the NameNode. I have no idea what to do! Does anyone recognize this error? Here is the output:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 408, in <module>
NameNode().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 530, in restart
self.start(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 103, in start
upgrade_suspended=params.upgrade_suspended, env=env)
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 212, in namenode
create_hdfs_directories(is_active_namenode_cmd)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 278, in create_hdfs_directories
only_if=check
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 463, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 460, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 246, in action_delayed
main_resource.resource.security_enabled, main_resource.resource.logoutput)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 133, in __init__
security_enabled, run_user)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/namenode_ha_utils.py", line 167, in get_property_for_active_namenode
if INADDR_ANY in value and rpc_key in hdfs_site:
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 81, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'dfs.namenode.https-address' was not found in configurations dictionary!
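In case someone hits the same trace: the script dies looking up dfs.namenode.https-address in hdfs-site. A minimal sketch for checking and restoring it after the rollback — the host and port below are placeholders (50470 is the usual NameNode HTTPS default on HDP 2.x):

```bash
# getconf ships with the Hadoop CLI
hdfs getconf -confKey dfs.namenode.https-address

# if the key is missing, re-add it to hdfs-site (via Ambari: HDFS > Configs):
#   <property>
#     <name>dfs.namenode.https-address</name>
#     <value>namenode.example.com:50470</value>
#   </property>
```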
Labels:
- Apache Ambari
- Apache Hadoop
09-05-2016
12:05 PM
I ended up just removing the TServer service from the nodes that were failing. Not really a solution, but the other ones still work fine. Thanks for your help!
09-05-2016
12:03 PM
It was a proxy issue. I restarted everything after configuring the proxy properly, and the widgets worked again.
08-31-2016
12:09 PM
It seems that there is only one directory in each, they are not "unwelcome", and they are owned by hdfs. But thank you for the idea! [EDIT]: hdfs dfs -ls / did show me 8 items, no errors.
08-31-2016
11:03 AM
@Sagar Shimpi I looked at the options for configuring existing views, but I don't think this will fix the issue, as given the errors I think it is an HDFS connection/installation problem and not just a UI issue. Unless you have an idea for a configuration change that could fix the "cluster's connection" and the hdfs recognition.
08-31-2016
10:46 AM
So I just very recently installed HDP 2.3.4.0 using the Ambari Install wizard and a local repository. I installed on 5 nodes according to the default/suggested options in the wizard. This is my first experience with the Ambari and HDP environment, so I am a bit lost. After a bit of bug fixing, all of the services are running (dashboard-aug31.png) on all 5 nodes, with no alerts. Yet the widgets all say n/a or load indefinitely, and all of the views are empty with messages like "cluster not connected", "NullPointerException", etc. Obviously there is a large flaw in my setup, and I don't know how to figure out what it is or how to fix it. I can't find anyone else reporting the same thing. Does anyone have any ideas? I haven't started actually using it yet, so there is no data anywhere. Here are screenshots of the views: yarn-queue-manager.png smartsense-view.png hive-view.png tez-view.png
08-31-2016
08:51 AM
Thank you for both responses @Josh Elser, at least I know it's not some obvious mistake I made. I will re-install the service and see what happens.
08-30-2016
07:16 AM
@Josh Elser Your suspicion is right: /usr/hdp/2.3.4.0-3485/etc/default/accumulo exists on the nodes which are correctly running TServer, and doesn't exist on the ones that aren't. EDIT: I tried just adding the file to one of the incorrectly-running nodes, and the error changed to "Error: Could not find or load main class org.apache.accumulo.start.Main" ... so @Jonathan Hurley is most likely right about there being an issue with the configuration files, I think? Is there a place either of you recommend I look for this difference in configuration between this node and the others? Thanks for your help.
08-30-2016
07:11 AM
This is the /usr/hdp/current/accumulo-client/bin/accumulo file that gives the error; it is identical on both nodes, the one whose TServer crashes and the one whose TServer seems to work fine.
#!/bin/sh
. /usr/hdp/2.3.4.0-3485/etc/default/hadoop
. /usr/hdp/2.3.4.0-3485/etc/default/accumulo
# Autodetect JAVA_HOME if not defined
if [ -e /usr/libexec/bigtop-detect-javahome ]; then
. /usr/libexec/bigtop-detect-javahome
elif [ -e /usr/lib/bigtop-utils/bigtop-detect-javahome ]; then
. /usr/lib/bigtop-utils/bigtop-detect-javahome
fi
export HDP_VERSION=${HDP_VERSION:-2.3.4.0-3485}
export ACCUMULO_OTHER_OPTS="-Dhdp.version=${HDP_VERSION} ${ACCUMULO_OTHER_OPTS}"
exec /usr/hdp/2.3.4.0-3485//accumulo/bin/accumulo.distro "$@"
so I could change the line ". /usr/hdp/2.3.4.0-3485/etc/default/accumulo", but if that's what's causing my problems then all of the nodes running it would/should have the same problem. I will compare their configs a little more to see if there's a difference somewhere.
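As a first pass at that comparison, something like this sketch is what I have in mind (the hostnames and conf path are placeholders):

```bash
# compare the defaults file between a healthy and a failing node
diff <(ssh good-node 'cat /usr/hdp/2.3.4.0-3485/etc/default/accumulo') \
     <(ssh bad-node  'cat /usr/hdp/2.3.4.0-3485/etc/default/accumulo')

# and compare the conf directories by file name
diff <(ssh good-node 'ls /etc/accumulo/conf') \
     <(ssh bad-node  'ls /etc/accumulo/conf')
```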
08-29-2016
08:19 AM
Hi @Josh Elser, thanks for the response! I can compare their log files, but otherwise I'm not sure how exactly to cross-reference them, as there is little to no log of where the problem is happening. What should I look at for cross-referencing other than the logs? I copied over the log files (from my working nodes) and made sure the appropriate names were used, but the behavior was the same. EDIT: I feel like I should add that I didn't copy over the "err" files from the working nodes, as there are no errors, so that file exists but is empty. I did remove the err line from both instances; this didn't change the behavior, which I didn't think it would.
08-26-2016
08:47 AM
Hi @Jonathan Hurley, thanks for the response. So it's a problem with the way Ambari tests whether or not the Accumulo TServer is started? That thread indicates a problem with "Ambari" in general, but all of my other Ambari services are running. It is only TServer that starts, stops immediately, and is reported as not running. If it were running, would there not be logs in those folders? Do you have any suggestions as to what I can do to confirm that this is the problem? As mentioned, I am brand new to Ambari and HDP in general.
08-25-2016
09:51 AM
So I've just completed the Ambari install wizard (Ambari 2.2, HDP 2.3.4) and started the services. Everything is running and working now (not at first), except that 3 of 5 hosts have the same issue with the Accumulo TServer. It starts up with the solid green check (photo attached) but stops after just a few seconds with the alert icon. The only information I found about the error is "Connection failed: [Errno 111] Connection refused to mo-31aeb5591.mo.sap.corp:9997", which I showed in the t-server-process. I checked my ssh connection and it's fine, and all of the other services installed fine, so I'm not sure what exactly that means. I posted the logs below; the .err file just said no such directory, and the .out file is empty. Are there other locations with more verbose error logs? As said, I am new to the environment. Any general troubleshooting advice for initial issues after installation, or links to guides that may help, would also be very appreciated.
[root@xxxxxxxxxxxx ~]# cd /var/log/accumulo/
[root@xxxxxxxxxxxx accumulo]# ls
accumulo-tserver.err accumulo-tserver.out
[root@xxxxxxxxxxxx accumulo]# cat accumulo-tserver.err
/usr/hdp/current/accumulo-client/bin/accumulo: line 4: /usr/hdp/2.3.4.0-3485/etc/default/accumulo: No such file or directory
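A quick sanity check I've since learned about, in case others land here: confirm the package layout matches what the wrapper script expects (hdp-select ships with HDP; the component naming may vary):

```bash
hdp-select status | grep -i accumulo        # which version each component points at
ls -l /usr/hdp/2.3.4.0-3485/etc/default/    # the directory the .err output says is missing a file
```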
Labels:
- Apache Accumulo
- Apache Ambari
08-05-2016
11:54 AM
Hi @sbhat, thank you for your response. I tried following that guide, but now my epel.repo is broken because it is provided through my network (I think that makes sense). This is the result. I guess I could get this repository elsewhere, but I have a feeling this will cause other large-scale issues, as I am using a product for my VMs that is provided only through my company's network. Is the only option to bypass the proxy completely? I am hoping there is something else I can do.
[root@mo-1184a7ee4 ~]# yum update
Loaded plugins: product-id, subscription-manager
Setting up Update Process
http://my-proxy-addr:8080/mrepo/redhat/6/rhel6epel-x86_64/RPMS.all/repodata/repomd.xml: [Errno 12] Timeout on http://my-proxy-addr:8080/mrepo/redhat/6/rhel6epel-x86_64/RPMS.all/repodata/repomd.xml: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: epel.repo. Please verify its path and try again
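One thing I'm considering, if the internal mirror is reachable without going through the proxy: yum supports a per-repository proxy override, so the proxy wouldn't need to be bypassed globally (the repo id below is illustrative):

```bash
# add this line under the repo's section in /etc/yum.repos.d/epel.repo:
#   proxy=_none_
# then verify just that repository:
yum --disablerepo='*' --enablerepo='epel*' repolist
```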
08-05-2016
08:17 AM
Thank you for the response @zkfs. Here are the errors: falcon-err-failure.png install-start-test-main-page.png select-stack-err.png Relevant/indicative error log from the UI for the falcon error (the only error that actually causes a failure and not just a warning in the Install step):
2016-08-05 07:59:03,052 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-08-05 07:59:03,053 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-08-05 07:59:03,054 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-08-05 07:59:03,055 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-08-05 07:59:03,059 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-08-05 07:59:03,059 - Group['hdfs'] {}
2016-08-05 07:59:03,060 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2016-08-05 07:59:03,060 - FS Type:
2016-08-05 07:59:03,060 - Directory['/etc/hadoop'] {'mode': 0755}
2016-08-05 07:59:03,075 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-08-05 07:59:03,075 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-08-05 07:59:03,090 - Repository['HDP-2.3'] {'base_url': 'http://xxxxxx:8012/hdp/centos6/HDP-2.3.4.0/', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2016-08-05 07:59:03,098 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.3]\nname=HDP-2.3\nbaseurl=http://mo-23f02a5a3.mo.sap.corp:8012/hdp/centos6/HDP-2.3.4.0/\n\npath=/\nenabled=1\ngpgcheck=0'}
2016-08-05 07:59:03,099 - Repository['HDP-UTILS-1.1.0.20'] {'base_url': 'http://xxxxxxx:8012/hdp/centos6/HDP-UTILS-1.1.0.20/', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2016-08-05 07:59:03,102 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.20]\nname=HDP-UTILS-1.1.0.20\nbaseurl=http://xxxxxxxxxxxx/hdp/centos6/HDP-UTILS-1.1.0.20/\n\npath=/\nenabled=1\ngpgcheck=0'}
2016-08-05 07:59:03,103 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-08-05 07:59:03,219 - Skipping installation of existing package unzip
2016-08-05 07:59:03,219 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-08-05 07:59:03,227 - Skipping installation of existing package curl
2016-08-05 07:59:03,228 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-08-05 07:59:03,236 - Skipping installation of existing package hdp-select
2016-08-05 07:59:03,376 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-08-05 07:59:03,384 - Package['falcon_2_3_*'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-08-05 07:59:03,499 - Installing package falcon_2_3_* ('/usr/bin/yum -d 0 -e 0 -y install 'falcon_2_3_*'')
08-05-2016
08:14 AM
So yum repolist -v showed a failure in resolving the proxy. Thank you for your response @mthiele, at least I now know what the problem most likely is. I am new to working with proxies like this, so please forgive my lack of knowledge. Is the only configuration necessary for my proxy to add the -Dhttp.proxyHost and -Dhttp.proxyPort lines to /var/lib/ambari-server/ambari-env.sh? Or is there something else I missed in configuring this? I have played around with this and don't know how else I could input them. Here is the syntax of how I edited the file, if my proxy was 111.111.1.111:8080:
AMBARI_PASSHPHRASE="DEV"
export AMBARI_JVM_ARGS=$AMBARI_JVM_ARGS' -Xms512m -Xmx2048m -Djava.security.auth.login.config=/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false -Dhttp.proxyHost=111.111.1.111 -Dhttp.proxyPort=8080'
Here is the exact response to "yum repolist -v":
[root@xxxxx ~]# yum repolist -v
Not loading "rhnplugin" plugin, as it is disabled
Loading "product-id" plugin
Loading "subscription-manager" plugin
Updating Subscription Management repositories.
Unable to read consumer identity
Config time: 0.118
Yum Version: 3.2.29
http://MY_SERVER/ambari/centos6/Updates-ambari-2.2.2.0/repodata/repomd.xml: [Errno 14] PYCURL ERROR 5 - "Couldn't resolve proxy '|111.111.1.111'"
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: Updates-ambari-2.2.2.0. Please verify its path and try again
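One detail worth noting in that output: the stray '|' in "Couldn't resolve proxy '|111.111.1.111'". yum doesn't read Ambari's JVM args at all; it takes its proxy from /etc/yum.conf or the http_proxy environment variable, so a malformed value in one of those would explain this. A sketch of where to look:

```bash
grep -i proxy /etc/yum.conf /etc/yum.repos.d/*.repo
env | grep -i proxy
# expected form in yum.conf (placeholder address):
#   proxy=http://111.111.1.111:8080
```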
Here are screenshots of the errors seen with the Ambari UI: falcon-err-failure.png install-start-test-main-page.png select-stack-err.png
08-05-2016
07:36 AM
I double checked that the BASE URL in each repo file is the same as the browser URL. I also did wgets to the repomd.xml files from the nodes, and this works (200 OK). I looked at the ambari server configuration file; however, I am not sure exactly what I am supposed to change there, as the repo links in that file seem to be only for the JDK, which I do not have a repository for and was under the impression I didn't need a local repo for. Is this incorrect? What specifically am I supposed to verify in the server configuration file?
08-04-2016
11:44 AM
Hello, So I'm quite lost in the errors in my cluster at this point. I'm hoping someone would be kind enough to take a look at the problems below and perhaps give me a couple of pointers or somewhere to start looking. Background: I used the hortonworks documentation guidelines to set up ambari server and agents, along with a local repository. I am able to access the local repository through my browser as well as via a wget from each node. The registration step goes smoothly and no warnings remain in that step. I am using a proxy and added the appropriate -Dhttp.proxyHost and -Dhttp.proxyPort in the way recommended to each host. I have gone through and tested the network configurations given by both of the following links: https://community.hortonworks.com/storage/attachments/2326-network-and-prereq-setup.pdf http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.2.0/bk_Installing_HDP_AMB/content/_prepare_the_environment.html
1. The "Select Stack" step of the install wizard returns a 404 for my local repository base URL, which wasn't happening before, but I don't recall exactly when it stopped working (initially the green checkmarks appeared and it failed later on).
2. The 404 repository base URL problem appears in the logs for the installation errors in the final step as well, after the accumulo client and server and the data node installed successfully on the appropriate nodes.
3. The first failure is the Falcon server, which is the only "Failure encountered" and seems to trigger the warnings for all of the preceding steps.
4. wget to the baseurl works from each node; however, wget to the repomd.xml times out and returns the 404 in the end.
Is there anything obvious that I could be missing, or an important step that is not included in the documentation? Any advice or ideas are appreciated. Thanks in advance, Savanna
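Given the split behavior in point 4 (the baseurl responds, but repomd.xml times out and then 404s), comparing a proxied fetch against a direct one might narrow it down; the URL below is a placeholder for my mirror:

```bash
curl -sI http://repo.example.com:8012/hdp/centos6/HDP-2.3.4.0/repodata/repomd.xml
curl -sI --noproxy '*' http://repo.example.com:8012/hdp/centos6/HDP-2.3.4.0/repodata/repomd.xml
```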
Labels:
- Apache Ambari
07-27-2016
01:30 PM
@Ashnee Sharma The changes to httpd.conf fixed everything, I'm now able to access the local repository 🙂 I also changed ServerName to be <FQDN>:8012. Thank you very much for your help!
07-27-2016
09:28 AM
Hello, I have gone through the instructions to set up a local repository, but when I try to go to the base URL link in my browser, it either times out or gives a 404 Not Found. I'm thinking I must have missed a step somewhere or am simply typing the wrong URL. I would appreciate any help. My httpd server is running:
[root@mo-e6f9afb24 /]# /usr/local/apache/bin/apachectl start
httpd (pid 5074) already running
I created the directory /var/www/html/hdp that has the HDP repo (installed using the .repo file; I also tried it with the tarball just in case) and put the ambari repo outside of this folder. Here was the response from my reposync and createrepo - does this look right? The Ambari and HDP-UTILS responses were the same.
[root@xxxxxxx hdp]# reposync -r HDP-2.4.2.0/
[root@xxxxxxx hdp]# createrepo /var/www/html/hdp/HDP-2.4.2.0/
Spawning worker 0 with 8 pkgs
Workers Finished
Gathering worker results
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
To see if my local repositories are there, I have used the base URL http://<my server's FQDN>/hdp/HDP-2.4.2.0. I have tried a few different things in terms of file structure, including the step of creating a centos folder inside hdp and including this in the base URL. Is it possible that someone may see a mistake that I've made? Or have an idea for troubleshooting the repository to find the issue? I would really appreciate any help.
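For anyone checking their own setup: createrepo writes a repodata/ directory inside the directory it is pointed at, so the base URL has to be the directory that directly contains repodata/. A sketch of an end-to-end check (the FQDN is a placeholder):

```bash
ls /var/www/html/hdp/HDP-2.4.2.0/repodata/repomd.xml                     # metadata exists on disk
curl -sI http://server.example.com/hdp/HDP-2.4.2.0/repodata/repomd.xml   # expect HTTP/1.1 200 OK
```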