Member since: 02-27-2019 | Posts: 7 | Kudos Received: 3 | Solutions: 1
04-03-2017 03:57 PM
ISSUE: All of the DataNodes in the cluster are shown as down in the Ambari dashboard, but HDFS is up, available, and healthy. The output of the ps command shows that the service is actually running on the DataNodes:
[root@node2 ~]# ps -ef | grep datanode
hdfs 11150 1 0 Aug09 ? 00:03:01 /usr/jdk64/jdk1.8.0_60/bin/java -Dproc_datanode -Xmx1024m -Dhdp.version=2.3.6.0-3796 -Djava.net.preferIPv4Stack=true
-Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log
-Dhadoop.home.dir=/usr/hdp/2.3.6.0-3796/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.3.6.0-3796/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.6.0-3796/hadoop/lib/native
-Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.3.6.0-3796 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-datanode-node2.openstacklocal.log
-Dhadoop.home.dir=/usr/hdp/2.3.6.0-3796/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.3.6.0-3796/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.6.0-3796/hadoop/lib/native:/usr/hdp/2.3.6.0-3796/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.6.0-3796/hadoop/lib/native
-Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log
-XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201608091928 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
-Xms1024m -Xmx1024m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC
-XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201608091928 -verbose:gc -XX:+PrintGCDetails
-XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms1024m -Xmx1024m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4
-XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201608091928
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms1024m -Xmx1024m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT
-Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
SOLUTION: This issue is resolved in Ambari 2.2.2.0.
WORKAROUND: For versions prior to Ambari 2.2.2.0, you can resolve this issue by making the following changes (a consolidated sketch of these steps appears after the root cause below).
1. Make a backup of the /usr/lib/python2.6/site-packages/ambari_agent/PythonReflectiveExecutor.py script:
cp /usr/lib/python2.6/site-packages/ambari_agent/PythonReflectiveExecutor.py /usr/lib/python2.6/site-packages/ambari_agent/PythonReflectiveExecutor.py_ORIG
2. Remove the file /usr/lib/python2.6/site-packages/ambari_agent/PythonReflectiveExecutor.pyc:
rm /usr/lib/python2.6/site-packages/ambari_agent/PythonReflectiveExecutor.pyc
3. Edit the /usr/lib/python2.6/site-packages/ambari_agent/PythonReflectiveExecutor.py script and change this:
sys.path.append(self.script_dir)
to this:
sys.path.insert(0, self.script_dir)
4. Restart the ambari-agent on the host.
ROOT CAUSE: In this particular case, the installation of additional Python modules via 'pip install unixODBC-devel' resulted in Ambari's cached hdfs.py conflicting with the Python hdfs libraries. This issue is outlined in the following bugs reported by Hortonworks and Apache:
Hortonworks: BUG-54974 - ambari cached hdfs.py conflicts with python hdfs lib resulting into monitoring errors (resolved in Ambari 2.2.2.0)
Apache Ambari: AMBARI-14926 - ambari cached hdfs.py conflicts with python hdfs lib resulting into monitoring errors (resolved in Apache Ambari 2.4)
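For convenience, the workaround steps above can be applied as a single sequence of shell commands. This is only a sketch, assuming the default agent install path shown above; verify the sed substitution against your copy of the file before restarting the agent.
# 1. Back up the original script
cp /usr/lib/python2.6/site-packages/ambari_agent/PythonReflectiveExecutor.py /usr/lib/python2.6/site-packages/ambari_agent/PythonReflectiveExecutor.py_ORIG
# 2. Remove the cached bytecode
rm -f /usr/lib/python2.6/site-packages/ambari_agent/PythonReflectiveExecutor.pyc
# 3. Prefer the script directory over the cached copy when resolving modules
sed -i 's/sys.path.append(self.script_dir)/sys.path.insert(0, self.script_dir)/' /usr/lib/python2.6/site-packages/ambari_agent/PythonReflectiveExecutor.py
# 4. Restart the agent so the change takes effect
ambari-agent restart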
04-03-2017 03:54 PM
1 Kudo
PROBLEM: user1 has submitted a YARN application with the application id application_1473860344791_0001. To look at the YARN logs, you would execute the following command:
yarn logs -applicationId application_1473860344791_0001
If you run the command above as 'user2', you may see output similar to the following:
16/09/19 23:00:23 INFO impl.TimelineClientImpl: Timeline service address: http://mycluster.somedomain.com:8188/ws/v1/timeline/
16/09/19 23:00:23 INFO client.RMProxy: Connecting to ResourceManager at mycluster.somedomain.com/192.168.1.89:8050
/app-logs/user2/logs/application_1473860344791_0001 does not exist.
Log aggregation has not completed or is not enabled.
ROOT CAUSE: When log aggregation has been enabled, each user's application logs are, by default, placed in the directory hdfs:///app-logs/<USERNAME>/logs/<APPLICATION_ID>. By default, only the user that submitted the job and members of the hadoop group have access to read the log files. In the example directory listing below you can see that the permissions are 770: no access for anyone other than the owner and members of the hadoop group.
[root@mycluster ~]$ hdfs dfs -ls /app-logs
Found 3 items
drwxrwx--- - hive hadoop 0 2017-03-10 15:33 /app-logs/hive
drwxrwx--- - user1 hadoop 0 2017-03-10 15:37 /app-logs/user1
drwxrwx--- - spark hadoop 0 2017-03-10 15:39 /app-logs/spark
SOLUTION: The message above can be deceiving and does not necessarily indicate that log aggregation has not been enabled. To obtain YARN logs for an application, the 'yarn logs' command must be executed as the user that submitted the application. In the example below the application was submitted by user1, so executing the same command as 'user1' should produce the following output, provided log aggregation has been enabled (an administrator alternative is sketched after the sample output below).
yarn logs -applicationId application_1473860344791_0001
16/09/19 23:10:33 INFO impl.TimelineClientImpl: Timeline service address: http://mycluster.somedomain.com:8188/ws/v1/timeline/
16/09/19 23:10:33 INFO client.RMProxy: Connecting to ResourceManager at mycluster.somedomain.com/192.168.1.89:8050
16/09/19 23:10:34 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
16/09/19 23:10:34 INFO compress.CodecPool: Got brand-new decompressor [.deflate]
Container: container_e03_1473860344791_0001_01_000001 on mycluster.somedomain.com_45454
===============================================================================
LogType:stderr
Log Upload Time:Wed Sep 14 09:44:15 -0400 2016
LogLength:0
Log Contents:
End of LogType:stderr
.....truncated output.....
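If you are an administrator and cannot log in as the submitting user, one option is to run the command on that user's behalf. This is only a sketch, assuming you have sudo rights on a gateway host where 'user1' exists; adjust the application id and paths for your cluster.
# Run 'yarn logs' as the user that submitted the application
sudo -u user1 yarn logs -applicationId application_1473860344791_0001 > application_1473860344791_0001.log
# Or, as the hdfs superuser, confirm the aggregated logs exist in HDFS
sudo -u hdfs hdfs dfs -ls /app-logs/user1/logs/application_1473860344791_0001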
REFERENCE:
The following document describes how to use log aggregation to collect logs for long-running YARN applications.
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_yarn-resource-management/content/ch_log_aggregation.html
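To confirm whether log aggregation is actually enabled on your cluster, you can check the yarn.log-aggregation-enable property. This is only a sketch, assuming the standard HDP client configuration directory; your layout may differ.
# Check whether log aggregation is enabled in the active YARN client configuration
grep -A1 'yarn.log-aggregation-enable' /etc/hadoop/conf/yarn-site.xml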
03-28-2019 02:04 PM
It's an Oozie ShareLib problem. The commands below worked for me.
**At the shell:**
sudo -u hdfs hadoop fs -chown cloudera:cloudera /user/oozie/share/lib/lib_20170719053712/sqoop
hdfs dfs -put /var/lib/sqoop/mysql-connector-java.jar /user/oozie/share/lib/lib_20170719053712/sqoop
sudo -u hdfs hadoop fs -chown oozie:oozie /user/oozie/share/lib/lib_20170719053712/sqoop
oozie admin -oozie http://localhost:11000/oozie -sharelibupdate
oozie admin -oozie http://localhost:11000/oozie -shareliblist sqoop
**At the Hue Sqoop client:**
sqoop list-tables --connect jdbc:mysql://localhost/retail_db --username root --password cloudera
More detail at: https://blog.cloudera.com/blog/2014/05/how-to-use-the-sharelib-in-apache-oozie-cdh-5/
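To confirm the connector landed where Oozie expects it, you can list the ShareLib directory after the update. This is only a sketch; the lib_20170719053712 timestamp directory is taken from the commands above and will differ on your cluster.
# Check that the MySQL connector jar is present in the Sqoop sharelib
hdfs dfs -ls /user/oozie/share/lib/lib_20170719053712/sqoop | grep mysql-connector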
04-03-2017 03:47 PM
OBSERVATIONS: After navigating to "Register Version" we see that HDP 2.4 and 2.5 are not registered versions.
We identified some INFO-level messages in the logs that tell us there is an issue with one of the XML files read by Ambari during startup:
15 Feb 2017 10:52:15,526 INFO [ambari-client-thread-67] AmbariMetaInfo:1417 - Stack HDP-2.0.6.GlusterFS is not active, skipping VDF
15 Feb 2017 10:52:15,528 INFO [ambari-client-thread-67] AmbariMetaInfo:1415 - Stack HDP-2.5 is not valid, skipping VDF: Could not parse XML /var/lib/ambari-server/resources/stacks/HDP/2.4/services/ARCADIA-ENTERPRISE/configuration/arcadia-config.xml: javax.xml.bind.UnmarshalException
- with linked exception:
[org.xml.sax.SAXParseException; systemId: file:/var/lib/ambari-server/resources/stacks/HDP/2.4/services/ARCADIA-ENTERPRISE/configuration/arcadia-config.xml; lineNumber: 1; columnNumber: 1; Premature end of file.]; Could not parse XML /var/lib/ambari-server/resources/stacks/HDP/2.4/services/ARCADIA-ENTERPRISE.bak/configuration/arcadia-config.xml: javax.xml.bind.UnmarshalException
- with linked exception:
[org.xml.sax.SAXParseException; systemId: file:/var/lib/ambari-server/resources/stacks/HDP/2.4/services/ARCADIA-ENTERPRISE.bak/configuration/arcadia-config.xml; lineNumber: 1; columnNumber: 1; Premature end of file.]; Could not parse XML /var/lib/ambari-server/resources/stacks/HDP/2.4/services/ARCADIA-ENTERPRISE.final.bak/configuration/arcadia-config.xml: javax.xml.bind.UnmarshalException
- with linked exception:
[org.xml.sax.SAXParseException; systemId: file:/var/lib/ambari-server/resources/stacks/HDP/2.4/services/ARCADIA-ENTERPRISE.final.bak/configuration/arcadia-config.xml; lineNumber: 1; columnNumber: 1; Premature end of file.]
15 Feb 2017 10:52:15,528 INFO [ambari-client-thread-67] AmbariMetaInfo:1417 - Stack HDP-2.0.6 is not active, skipping VDF
15 Feb 2017 10:52:15,528 INFO [ambari-client-thread-67] AmbariMetaInfo:1417 - Stack HDP-2.3.ECS is not active, skipping VDF
15 Feb 2017 10:52:15,529 INFO [ambari-client-thread-67] AmbariMetaInfo:1415 - Stack HDP-2.4 is not valid, skipping VDF: Could not parse XML /var/lib/ambari-server/resources/stacks/HDP/2.4/services/ARCADIA-ENTERPRISE/configuration/arcadia-config.xml: javax.xml.bind.UnmarshalException
- with linked exception:
[org.xml.sax.SAXParseException; systemId: file:/var/lib/ambari-server/resources/stacks/HDP/2.4/services/ARCADIA-ENTERPRISE/configuration/arcadia-config.xml; lineNumber: 1; columnNumber: 1; Premature end of file.]; Could not parse XML /var/lib/ambari-server/resources/stacks/HDP/2.4/services/ARCADIA-ENTERPRISE.bak/configuration/arcadia-config.xml: javax.xml.bind.UnmarshalException
- with linked exception:
[org.xml.sax.SAXParseException; systemId: file:/var/lib/ambari-server/resources/stacks/HDP/2.4/services/ARCADIA-ENTERPRISE.bak/configuration/arcadia-config.xml; lineNumber: 1; columnNumber: 1; Premature end of file.]; Could not parse XML /var/lib/ambari-server/resources/stacks/HDP/2.4/services/ARCADIA-ENTERPRISE.final.bak/configuration/arcadia-config.xml: javax.xml.bind.UnmarshalException
- with linked exception:
[org.xml.sax.SAXParseException; systemId: file:/var/lib/ambari-server/resources/stacks/HDP/2.4/services/ARCADIA-ENTERPRISE.final.bak/configuration/arcadia-config.xml; lineNumber: 1; columnNumber: 1; Premature end of file.]
In this case the file /var/lib/ambari-server/resources/stacks/HDP/2.4/services/ARCADIA-ENTERPRISE/configuration/arcadia-config.xml was empty. This generated an error that Ambari could not recover from and resulted in a failure to register the HDP stack version. As a result, we could not add repositories for HDP 2.4 or 2.5.
WORKAROUND: There are two ways to work around this issue.
1. Put at least one construct in the XML file indicated in the log. For example:
<?xml version="1.0"?>
<!-- Some comment -->
This will prevent Ambari from throwing an exception and failing to register the HDP version. You will need to restart Ambari Server after you update this file.
2. Move the empty XML file to another directory and comment the corresponding entry out of the metainfo.xml file for the custom service. In this case we moved arcadia-config.xml to /var/tmp, modified /var/lib/ambari-server/resources/stacks/HDP/2.4/services/ARCADIA-ENTERPRISE/metainfo.xml, and commented out the reference to arcadia-config.xml. You will need to restart Ambari after making this change (a sketch of these steps follows).
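A minimal sketch of the second workaround, using the paths from the log messages above; adjust the service directory for your own custom service.
# Move the empty configuration file out of the stack definition
mv /var/lib/ambari-server/resources/stacks/HDP/2.4/services/ARCADIA-ENTERPRISE/configuration/arcadia-config.xml /var/tmp/
# Edit metainfo.xml for the service, comment out the arcadia-config entry, then restart Ambari Server
ambari-server restart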
SOLUTION: This issue will be corrected in Ambari 3.0.0 via BUG-75022
01-11-2016 10:04 PM
2 Kudos
The answer is going to depend on exactly what you are looking for. Knox has a general-purpose WebAppSecurityProvider that currently supports layering Cross Site Request Forgery (CSRF) protection onto any of the REST APIs Knox currently supports. The WebAppSecurityProvider is also extensible, so that support for other common web application vulnerabilities can be developed and plugged in. Knox does not currently have any support for layering injection vulnerability protection onto the supported REST APIs. This is possible for some services given the architecture, but it would require a much tighter coupling between Knox and those services than would be ideal. Can you please clarify what you mean by "broken authentication" before I tackle that one?
12-09-2015 05:39 PM
I forgot: the NameNodes should also be added to the hosts file on the Windows machine (example entries are sketched below).
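These are example hosts file entries with hypothetical IP addresses and hostnames; substitute the addresses and fully qualified names of your own NameNodes. On Windows the file is C:\Windows\System32\drivers\etc\hosts.
# Hypothetical NameNode entries for the hosts file
192.168.1.10  namenode1.example.com  namenode1
192.168.1.11  namenode2.example.com  namenode2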