Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2004 | 06-15-2020 05:23 AM |
| | 16516 | 01-30-2020 08:04 PM |
| | 2160 | 07-07-2019 09:06 PM |
| | 8375 | 01-27-2018 10:17 PM |
| | 4747 | 12-31-2017 10:12 PM |
06-16-2019
05:32 AM
Also, I can't start the JournalNode (on the bad NameNode):

```text
2019-06-16 05:29:39,734 WARN namenode.FSImage (EditLogFileInputStream.java:scanEditLog(359)) - Caught exception after scanning through 0 ops from /hadoop/hdfs/journal/hdfsha/current/edits_inprogress_0000000000018783114 while determining its valid length. Position was 1032192
java.io.IOException: Can't scan a pre-transactional edit log.
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$LegacyReader.scanOp(FSEditLogOp.java:4974)
    at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.scanNextOp(EditLogFileInputStream.java:245)
    at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.scanEditLog(EditLogFileInputStream.java:355)
    at org.apache.hadoop.hdfs.server.namenode.FileJournalManager$EditLogFile.scanLog(FileJournalManager.java:551)
    at org.apache.hadoop.hdfs.qjournal.server.Journal.scanStorageForLatestEdits(Journal.java:192)
    at org.apache.hadoop.hdfs.qjournal.server.Journal.<init>(Journal.java:152)
    at org.apache.hadoop.hdfs.qjournal.server.JournalNode.getOrCreateJournal(JournalNode.java:90)
    at org.apache.hadoop.hdfs.qjournal.server.JournalNode.getOrCreateJournal(JournalNode.java:99)
    at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.getJournalState(JournalNodeRpcServer.java:127)
    at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.getJournalState(QJournalProtocolServerSideTranslatorPB.java:118)
    at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25415)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
2019-06-16 05:29:39,734 WARN namenode.FSImage (EditLogFileInputStream.java:scanEditLog(364)) - After resync, position is 1032192
```
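For reference, one way to see how far the suspect segment parses before the failure is the Offline Edits Viewer (a hedged sketch; the input path is simply the one from the warning above):

```bash
# Hedged sketch: dump the suspect in-progress edit segment with the
# Offline Edits Viewer; the input path comes from the warning above.
hdfs oev -i /hadoop/hdfs/journal/hdfsha/current/edits_inprogress_0000000000018783114 \
  -o /tmp/edits_inprogress.xml
```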
05-13-2019
07:32 AM
1 Kudo
You should copy the directory structure as it is. Create the hbase folder in /data_metrics/lib/ambari-metrics-collector/ and copy the contents over from the original location unchanged.
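A minimal sketch of that copy, assuming the data currently lives under /var/lib/ambari-metrics-collector and the collector runs as ams:hadoop (both the source path and the owner are assumptions; adjust for your layout):

```bash
# Hedged sketch: replicate the AMS hbase directory as-is, preserving
# permissions and ownership. Source path and ams:hadoop owner are assumptions.
mkdir -p /data_metrics/lib/ambari-metrics-collector
cp -rp /var/lib/ambari-metrics-collector/hbase /data_metrics/lib/ambari-metrics-collector/
chown -R ams:hadoop /data_metrics/lib/ambari-metrics-collector/hbase
```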
05-13-2019
07:36 AM
1 Kudo
It's not advisable to delete the intermediate files of the hbase directory directly. If you are OK with erasing the data, follow this: https://cwiki.apache.org/confluence/display/AMBARI/Cleaning+up+Ambari+Metrics+System+Data
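Roughly, the flow that page describes for embedded-mode AMS looks like the sketch below (the paths are assumptions; confirm hbase.rootdir and hbase.tmp.dir in ams-hbase-site for your cluster first):

```bash
# Hedged sketch of the cleanup flow from the linked wiki page.
# Stop the Metrics Collector from Ambari before running this; the paths
# below are assumptions — check hbase.rootdir and hbase.tmp.dir in ams-hbase-site.
rm -rf /var/lib/ambari-metrics-collector/hbase/*       # hbase.rootdir contents
rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/*   # hbase.tmp.dir (incl. ZK data)
# Restart the Metrics Collector from Ambari afterwards.
```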
04-30-2019
03:16 PM
@Michael Bronson Any updates?
04-23-2019
05:21 AM
1 Kudo
@Michael Bronson Out-of-the-box configs are much easier, but the config you have implemented is the correct way to integrate Presto with Hadoop. These files must be present on all the Presto nodes 🙂
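For illustration, a typical hive catalog on a Presto node looks roughly like this; the catalog path, metastore URI, and config file locations below are all assumptions, not values from this thread:

```bash
# Hedged example: a hive catalog pointing at the Hadoop client configs.
# Every value below is an assumption — adjust paths and hosts for your cluster.
cat > /etc/presto/catalog/hive.properties <<'EOF'
connector.name=hive-hadoop2
hive.metastore.uri=thrift://metastore-host.example.com:9083
hive.config.resources=/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
EOF
```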
05-13-2019
09:46 PM
Hi, can you put a DataNode in maintenance through a bash command or a direct Python command? I have a ginormous cluster and I want to quickly stop and start services. I am using hadoop-daemon.sh start to start a DataNode. I know maintenance mode is not part of the Hadoop API; it is built into Ambari.
04-06-2019
08:08 PM
1 Kudo
@Michael Bronson Find the hostnames where the "SPARK2_THRIFTSERVER" component is running:

```bash
# curl -H "X-Requested-By: ambari" -u admin:admin -X GET "http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/hosts?(host_components/HostRoles/component_name=SPARK2_THRIFTSERVER)&minimal_response=true" | grep host_name | awk -F":" '{print $2}' | awk -F"\"" '{print $2}'
```

Example output:

```text
newhwx3.example.com
newhwx5.example.com
```

Once we know the hosts where the "SPARK2_THRIFTSERVER" is running, we can run the following commands (replacing the hosts newhwx3 and newhwx5 as needed) to turn ON maintenance mode for it:

```bash
# curl -H "X-Requested-By: ambari" -u admin:admin -X PUT -d '{"RequestInfo":{"context":"Turn ON Maintenance Mode for Spark2 Thrift Server"},"Body":{"HostRoles":{"maintenance_state":"ON"}}}' "http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/hosts/newhwx3.example.com/host_components/SPARK2_THRIFTSERVER"
# curl -H "X-Requested-By: ambari" -u admin:admin -X PUT -d '{"RequestInfo":{"context":"Turn ON Maintenance Mode for Spark2 Thrift Server"},"Body":{"HostRoles":{"maintenance_state":"ON"}}}' "http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/hosts/newhwx5.example.com/host_components/SPARK2_THRIFTSERVER"
```

To turn OFF maintenance mode for the Spark2 Thrift Server on newhwx3 and newhwx5:

```bash
# curl -H "X-Requested-By: ambari" -u admin:admin -X PUT -d '{"RequestInfo":{"context":"Turn OFF Maintenance Mode for Spark2 Thrift Server"},"Body":{"HostRoles":{"maintenance_state":"OFF"}}}' "http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/hosts/newhwx3.example.com/host_components/SPARK2_THRIFTSERVER"
# curl -H "X-Requested-By: ambari" -u admin:admin -X PUT -d '{"RequestInfo":{"context":"Turn OFF Maintenance Mode for Spark2 Thrift Server"},"Body":{"HostRoles":{"maintenance_state":"OFF"}}}' "http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/hosts/newhwx5.example.com/host_components/SPARK2_THRIFTSERVER"
```
04-01-2019
09:53 AM
1 Kudo
@Michael Bronson, permission issue 🙂 The error is:

```text
java.io.IOException: Permission denied: user=root, access=WRITE, inode="/benchmarks/TestDFSIO/io_control/in_file_test_io_0":hdfs:hdfs:drwxr-xr-x
```

Either run the command as the hdfs user or change the ownership of /benchmarks/TestDFSIO to root.
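Hedged sketches of both options (the benchmark jar name and flags are assumptions; adjust for your distribution):

```bash
# Option 1: run the benchmark as the hdfs superuser (jar name/flags are assumptions).
sudo -u hdfs hadoop jar hadoop-mapreduce-client-jobclient-tests.jar TestDFSIO -write -nrFiles 4 -fileSize 128MB
# Option 2: hand ownership of the benchmark directory to root instead.
sudo -u hdfs hdfs dfs -chown -R root:root /benchmarks/TestDFSIO
```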
03-15-2019
12:09 PM
@Jay So I need to change the default port? Is that the case?
03-08-2019
08:46 AM
1 Kudo
Hi @Michael Bronson You are specifying /folder/*.jar. If you want the .jar files from one level deeper, you would specify /folder/*/*.jar. Alternatively, here is an example using hdfs dfs -find (with the pattern quoted so the local shell doesn't expand it):

```bash
[hdfs@c2175-node4 stuff]$ hdfs dfs -find /tmp -name '*.jar'
/tmp/somefolder/y.jar
/tmp/x.jar
[hdfs@c2175-node4 stuff]$ for result in `hdfs dfs -find /tmp -name '*.jar'`; do hdfs dfs -copyToLocal $result; done
[hdfs@c2175-node4 stuff]$ ls -al
-rw-r--r-- 1 hdfs hadoop 0 Mar 8 08:43 x.jar
-rw-r--r-- 1 hdfs hadoop 0 Mar 8 08:43 y.jar
```