Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2714 | 04-27-2020 03:48 AM |
|  | 5274 | 04-26-2020 06:18 PM |
|  | 4443 | 04-26-2020 06:05 PM |
|  | 3566 | 04-13-2020 08:53 PM |
|  | 5375 | 03-31-2020 02:10 AM |
09-24-2018
11:41 PM
@Sami Ahmad As we see a "504" error here, which is a Proxy Gateway error, please check whether you have enabled any HTTP/network proxy on your end. I suspect that the WebHDFS requests originated by the Hive View are actually passing through an HTTP proxy configured on your cluster. You may need to either make the requests bypass the proxy server or make the proxy work. So please check the following:

1. Check the "environment" settings to find out whether any HTTP proxy is added (look for 'proxy'):

# /var/lib/ambari-agent/ambari-sudo.sh su hdfs -l -s /bin/bash -c 'env'

2. See whether you are able to make the WebHDFS call via terminal from the Ambari Server host, and check from the output whether the request is being passed via a proxy:

# curl -ivL -X GET "http://$ACTIVE_NAME_NODE:50070/webhdfs/v1/user/admin?op=GETHOMEDIRECTORY&user.name=admin"

3. You can also refer to the following doc on enabling HTTP proxy settings inside Ambari Server (and you can configure the Ambari JVM property so that requests to your cluster nodes are NOT passed via the proxy). See: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.0/bk_ambari-administration/content/ch_setting_up_an_internet_proxy_server_for_ambari.html

-Dhttp.nonProxyHosts=<pipe|separated|list|of|hosts>

4. Or you can configure "no_proxy" globally in "~/.bash_profile" or "/etc/profile" to make sure that your internal cluster requests are not passed via the proxy:

no_proxy=".example.com"
export no_proxy
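As a rough sketch of option 3 (the host list below is only illustrative; the doc linked above describes the proxy settings in detail), the non-proxy hosts can be appended to the Ambari Server JVM arguments in "/var/lib/ambari-server/ambari-env.sh" and the server restarted:

# vi /var/lib/ambari-server/ambari-env.sh
    (append the host list to the existing AMBARI_JVM_ARGS line, for example)
    export AMBARI_JVM_ARGS="$AMBARI_JVM_ARGS -Dhttp.nonProxyHosts=*.example.com|localhost|127.0.0.1"
# ambari-server restart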
09-23-2018
03:15 PM
@Mustafa Ali Qizilbash As we can see, the error is from the webserver where you have set up the local repo:

AH00035: access to /repo/repodata/index.html denied (filesystem path '/var/www/html/repo/repodata/index.html') because search permissions are missing on a component of the path

This kind of error mostly occurs when the local repo directories do not have the proper permissions. Please refer to the following doc for more details about the webserver error "AH00035: access denied because search permissions are missing on a component of the path": https://wiki.apache.org/httpd/13PermissionDenied

Try setting the proper permissions on your webserver repo directories:

# find /var/www -type d -exec chmod 755 {} \;
# find /var/www -type f -exec chmod 644 {} \;

Then restart your webserver and try again.
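If the error persists after that, a quick way to spot which path component is still missing the execute (search) bit is to walk the full path from your error message, for example:

# namei -l /var/www/html/repo/repodata/index.html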
09-23-2018
03:06 PM
@John Seekins The HDP 3.0 installation itself comes with the "libhdfs.so.0.0.0" binary of the correct, tested version. You do not need to download it separately from a third party, as that might cause conflicts.

# ls -lart /usr/hdp/3.0.0.0-1634/usr/lib/
total 280
-rwxr-xr-x. 1 root root 286676 Jul 12 21:02 libhdfs.so.0.0.0
drwxr-xr-x. 4 root root     32 Jul 21 08:15 ..
lrwxrwxrwx. 1 root root     16 Jul 21 08:15 libhdfs.so -> libhdfs.so.0.0.0
drwxr-xr-x. 2 root root     48 Jul 21 08:15 .

The recommendation is to "reinstall" the specific package with yum. As we can see, "libhdfs.so.0.0.0" comes from the following repo/package:

# yum whatprovides '*libhdfs.so.0.0.0'
hadoop_3_0_0_0_1634-libhdfs-3.1.0.3.0.0.0-1634.x86_64 : Hadoop Filesystem Library
Repo        : HDP-3.0-repo-51
Matched from:
Filename    : /usr/hdp/3.0.0.0-1634/usr/lib/libhdfs.so.0.0.0

Hence please reinstall that package, and it will pull the missing file:

# yum reinstall "hadoop_3_0_0_0_1634-libhdfs-3.1.0.3.0.0.0-1634.x86_64"
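After the reinstall, a quick sanity check (a sketch; the package name matches the whatprovides output above) is to verify the package files with rpm and confirm the library is back in place:

# rpm -V hadoop_3_0_0_0_1634-libhdfs
# ls -l /usr/hdp/3.0.0.0-1634/usr/lib/libhdfs.so*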
09-19-2018
08:29 AM
@Mustafa Ali Qizilbash As mentioned earlier, you are getting a 503 error while accessing the local repo, which means the webserver where you have placed your repo might be having some issue. Hence, can you please share your webserver logs (the httpd logs from when you noticed the 503 error)?

http://ufm.hadoop.com/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
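For example (a sketch assuming a default Apache httpd layout on CentOS 7; adjust the paths if your webserver logs elsewhere), check the service state and the log entries from around the time of the 503:

# systemctl status httpd
# tail -n 100 /var/log/httpd/error_log
# tail -n 100 /var/log/httpd/access_log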
09-18-2018
06:35 AM
@Mustafa Ali Qizilbash More detailed steps on setting up a Local repo can be found here: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.0/bk_ambari-installation/content/setting_up_a_local_repository_with_no_internet_access.html Also please make sure that you have the "/etc/yum.repos.d/ambari.repo" file pointing to your Local Repo.
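For illustration only, a local ambari.repo typically looks like the sketch below (the baseurl is hypothetical and reuses the repo host from your earlier error; adjust the Ambari version and path to match your local repository):

# cat /etc/yum.repos.d/ambari.repo
[ambari-2.6.2.0]
name=ambari Version - ambari-2.6.2.0
baseurl=http://ufm.hadoop.com/repo/ambari/centos7/2.6.2.0
gpgcheck=0
enabled=1
priority=1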
09-18-2018
06:33 AM
@Mustafa Ali Qizilbash

Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=stock error was 14: curl#6 - "Could not resolve host: mirrorlist.centos.org; Unknown error"

The above error indicates that some of your repositories still point to public URLs. Please open the "/etc/yum.repos.d/*.repo" files and disable those which use a public baseurl by setting "enabled=0":

1. Find the repo files which are using public repos:

# grep baseurl /etc/yum.repos.d/*.repo

2. Open each file whose repo URL points to a public host (instead of pointing to your local repo host) and disable it by setting "enabled=0".

3. Perform a yum clean and then continue installing Ambari:

# yum clean all
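As an illustration only (review each file before editing; "CentOS-Base.repo" is the usual name of the stock CentOS repo file and may differ on your hosts), the public repos can be disabled in one pass and the result verified:

# sed -i 's/^enabled=1/enabled=0/' /etc/yum.repos.d/CentOS-Base.repo
# yum repolist enabled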
09-17-2018
12:45 PM
@Lanic
RuntimeError: Failed to execute command '/usr/bin/yum -y install unzip', exited with code '1', message: '
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue.

The yum installation failure indicates that there is some issue with the yum repos; maybe one of the "/etc/yum.repos.d/*.repo" files is not correct. Can you please run the following command manually on the failing host to see if you are able to install this package on your own?

# yum -y install unzip

If it fails, please clean the yum DB and then check whether you are able to install packages manually on that host:

# yum clean all
# yum -y install unzip

If it still fails, then please copy the "/etc/yum.repos.d/*.repo" files from any working node of your cluster (same OS) to the problematic host and try again.
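A minimal sketch of that last step ("working-node" is just a placeholder for any healthy node of the same OS version):

# scp working-node:/etc/yum.repos.d/*.repo /etc/yum.repos.d/
# yum clean all && yum repolist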
09-17-2018
10:40 AM
@Roberto Ayuso The following article explains the "is running beyond physical memory limits" issue and its remedy in detail: https://dzone.com/articles/configuring-memory-for-mapreduce-running-on-yarn
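In short, the container sizes and the JVM heap sizes have to be tuned together so the heap fits inside the container; a sketch of the relevant mapred-site.xml properties (the values below are purely illustrative, with the heap at roughly 80% of the container size):

mapreduce.map.memory.mb=4096
mapreduce.map.java.opts=-Xmx3276m
mapreduce.reduce.memory.mb=8192
mapreduce.reduce.java.opts=-Xmx6553m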
09-17-2018
08:01 AM
@Saurabh In the case of Spark2, you can enable DEBUG logging by invoking "sc.setLogLevel("DEBUG")" as follows:

$ export SPARK_MAJOR_VERSION=2
$ spark-shell --master yarn --deploy-mode client
SPARK_MAJOR_VERSION is set to 2, using Spark2
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://newhwx1.example.com:4040
Spark context available as 'sc' (master = yarn, app id = application_1536125228953_0007).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.0.2.6.5.0-292
      /_/
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_112)
Type in expressions to have them evaluated.
Type :help for more information.
scala> sc.setLogLevel("DEBUG")
scala> 18/09/17 07:58:57 DEBUG Client: IPC Client (1024266763) connection to newhwx1.example.com/10.10.10.10:8032 from spark sending #69
18/09/17 07:58:57 DEBUG Client: IPC Client (1024266763) connection to newhwx1.example.com/10.10.10.10:8032 from spark got value #69
18/09/17 07:58:57 DEBUG Client: IPC Client (1024266763) connection to newhwx1.example.com/10.10.10.10:8032 from spark got value #69
18/09/17 07:58:57 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 8ms
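To restore the normal verbosity within the same session, simply set the level back, for example:

scala> sc.setLogLevel("WARN")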
09-17-2018
07:53 AM
@Saurabh For example, if you create a "/tmp/log4j.properties" like the following:

# cat /tmp/log4j.properties
log4j.rootCategory=debug,console
log4j.logger.com.demo.package=debug,console
log4j.additivity.com.demo.package=false
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.out
log4j.appender.console.immediateFlush=true
log4j.appender.console.encoding=UTF-8
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.conversionPattern=%d [%t] %-5p %c - %m%n

Then run spark-shell as follows, and you should see the DEBUG messages:

# su - spark
# spark-shell --master yarn --deploy-mode client --files /tmp/log4j.properties --conf "spark.executor.extraJavaOptions='-Dlog4j.configuration=log4j.properties'" --driver-java-options "-Dlog4j.configuration=file:/tmp/log4j.properties"
Multiple versions of Spark are installed but SPARK_MAJOR_VERSION is not set
Spark1 will be picked by default
2018-09-17 07:52:29,343 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)])
2018-09-17 07:52:29,388 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)])
2018-09-17 07:52:29,389 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[GetGroups])
2018-09-17 07:52:29,390 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Renewal failures since startup])
2018-09-17 07:52:29,390 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Renewal failures since last successful login])
2018-09-17 07:52:29,392 [main] DEBUG org.apache.hadoop.metrics2.impl.MetricsSystemImpl - UgiMetrics, User and group related metrics
2018-09-17 07:52:29,845 [main] DEBUG org.apache.hadoop.security.SecurityUtil - Setting hadoop.security.token.service.use_ip to true
2018-09-17 07:52:30,386 [main] DEBUG org.apache.hadoop.util.Shell - setsid exited with exit code 0
2018-09-17 07:52:30,501 [main] DEBUG org.apache.hadoop.security.Groups - Creating new Groups object
2018-09-17 07:52:30,523 [main] DEBUG org.apache.hadoop.util.NativeCodeLoader - Trying to load the custom-built native-hadoop library...
2018-09-17 07:52:30,534 [main] DEBUG org.apache.hadoop.util.NativeCodeLoader - Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
2018-09-17 07:52:30,535 [main] DEBUG org.apache.hadoop.util.NativeCodeLoader - java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2018-09-17 07:52:30,535 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-09-17 07:52:30,536 [main] DEBUG org.apache.hadoop.util.PerformanceAdvisory - Falling back to shell based
2018-09-17 07:52:30,537 [main] DEBUG org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback - Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
2018-09-17 07:52:30,709 [main] DEBUG org.apache.hadoop.security.Groups - Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
2018-09-17 07:52:30,751 [main] DEBUG org.apache.hadoop.security.UserGroupInformation - hadoop login
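The same flags work for spark-submit as well; a minimal sketch (the application jar and class below are hypothetical placeholders):

# spark-submit --master yarn --deploy-mode client \
    --files /tmp/log4j.properties \
    --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j.properties" \
    --driver-java-options "-Dlog4j.configuration=file:/tmp/log4j.properties" \
    --class com.example.MyApp /tmp/my-app.jar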