Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2064 | 06-15-2020 05:23 AM |
| | 17145 | 01-30-2020 08:04 PM |
| | 2232 | 07-07-2019 09:06 PM |
| | 8611 | 01-27-2018 10:17 PM |
| | 4870 | 12-31-2017 10:12 PM |
04-06-2019
08:06 PM
@Jay, when we run this API we see that the Spark History Server also goes into maintenance mode. Is it possible to set maintenance mode only on the Thrift Server and not on the Spark History Server?
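For reference, Ambari also accepts a maintenance state on an individual host component instead of the whole service, which should leave the History Server untouched. A sketch; the host, cluster name, credentials, and the component name `SPARK2_THRIFTSERVER` are assumptions for an HDP 2.6-style stack:

```shell
# Put only the Thrift Server component on one host into maintenance mode
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Maintenance: Thrift Server only"},
       "Body":{"HostRoles":{"maintenance_state":"ON"}}}' \
  http://ambari-host:8080/api/v1/clusters/mycluster/hosts/master-node1/host_components/SPARK2_THRIFTSERVER
```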
04-06-2019
07:37 PM
As we all know, we can put any service into maintenance mode from the Ambari GUI. Is it also possible to set an Ambari service to maintenance mode with an API command?
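Ambari does expose maintenance mode through its REST API. A sketch, assuming an Ambari server on port 8080, a cluster named `mycluster`, the service `SPARK2`, and `admin/admin` credentials (all placeholders):

```shell
# Turn maintenance mode ON for a whole service; use "OFF" to turn it back off
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Turn on maintenance mode"},
       "Body":{"ServiceInfo":{"maintenance_state":"ON"}}}' \
  http://ambari-host:8080/api/v1/clusters/mycluster/services/SPARK2
```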
04-05-2019
01:23 PM
Hi all. How can we set both Thrift Servers in Ambari to maintenance mode via the API? We need to stop both Thrift Server services and set them to maintenance mode, i.e. stop both Thrift Servers by API and set both to maintenance mode by API.
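A sketch of doing both steps per host against the Ambari REST API; host names, cluster name, credentials, and the component name `SPARK2_THRIFTSERVER` are assumptions:

```shell
for host in master-node1 master-node2; do
  # 1) maintenance mode ON for the Thrift Server component on this host
  curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
    -d '{"Body":{"HostRoles":{"maintenance_state":"ON"}}}' \
    "http://ambari-host:8080/api/v1/clusters/mycluster/hosts/$host/host_components/SPARK2_THRIFTSERVER"
  # 2) stop it: Ambari stops a component by setting its desired state to INSTALLED
  curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
    -d '{"RequestInfo":{"context":"Stop Thrift Server"},
         "Body":{"HostRoles":{"state":"INSTALLED"}}}' \
    "http://ambari-host:8080/api/v1/clusters/mycluster/hosts/$host/host_components/SPARK2_THRIFTSERVER"
done
```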
03-31-2019
06:13 AM
Hi all. DFSIO is a built-in benchmark tool for HDFS I/O testing. We created the base directory to store test results, /benchmarks/TestDFSIO, and when we run the test we get:

```
hadoop jar /usr/hdp/2.6.4.0-91/spark2/jars/hadoop-mapreduce-client-jobclient-2.7.3.2.6.4.0-91.jar TestDFSIO -write -nrFiles 16 -fileSize 1GB -resFile /tmp/USER-dfsio-write.txt
19/03/31 05:39:57 INFO fs.TestDFSIO: TestDFSIO.1.8
19/03/31 05:39:57 INFO fs.TestDFSIO: nrFiles = 16
19/03/31 05:39:57 INFO fs.TestDFSIO: nrBytes (MB) = 1024.0
19/03/31 05:39:57 INFO fs.TestDFSIO: bufferSize = 1000000
19/03/31 05:39:57 INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
19/03/31 05:39:59 INFO fs.TestDFSIO: creating control file: 1073741824 bytes, 16 files
java.io.IOException: Permission denied: user=root, access=WRITE, inode="/benchmarks/TestDFSIO/io_control/in_file_test_io_0":hdfs:hdfs:drwxr-xr-x
```

Where are we going wrong?
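The exception itself points at the likely cause: the job runs as `root`, but `/benchmarks/TestDFSIO` is owned by `hdfs:hdfs` with `drwxr-xr-x`, and `root` is not a superuser in HDFS, so it has no write access. A sketch of two common fixes, assuming `hdfs` is the HDFS superuser:

```shell
# Option 1: run the benchmark as the hdfs user
sudo -u hdfs hadoop jar /usr/hdp/2.6.4.0-91/spark2/jars/hadoop-mapreduce-client-jobclient-2.7.3.2.6.4.0-91.jar \
  TestDFSIO -write -nrFiles 16 -fileSize 1GB -resFile /tmp/USER-dfsio-write.txt

# Option 2: give root ownership of the benchmark directory, then rerun as root
sudo -u hdfs hdfs dfs -chown -R root:root /benchmarks/TestDFSIO
```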
03-15-2019
12:09 PM
@Jay, so I need to change the default port? Is that the case?
03-15-2019
08:55 AM
@Jay, for now we started the Thrift Server and it is up, but it will go down again soon; this happens after roughly an hour or a little more. For now we get:

```
nc -l 10016
Ncat: bind to :::10016: Address already in use. QUITTING.
```

But after some time the Thrift Server goes down in Ambari, and then the nc command no longer gives the output above. In any case, you said "Or change the spark thrift server setting to use specific address instead of "0.0.0.0" and then see if it works". Do you mean changing the default port from 10016 to another one, for example 10055? 0.0.0.0 isn't a real address, or am I misunderstanding you?
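For context, the Spark Thrift Server reuses the HiveServer2 binding properties, so to the best of my knowledge the port and the bind address are separate settings; the hostname below is illustrative:

```properties
# port the Spark Thrift Server listens on (10016 is the HDP default)
hive.server2.thrift.port=10016
# bind to one specific interface instead of all interfaces (0.0.0.0)
hive.server2.thrift.bind.host=master-node1.example.com
```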
03-15-2019
07:35 AM
We have an Ambari cluster with two Thrift Servers. The first Thrift Server always fails with "Address already in use" on the master-node1 machine. We get the following error in the Thrift Server log (under /var/log/spark2):

```
19/03/08 08:42:59 ERROR ThriftCLIService: Error starting HiveServer2: could not start ThriftBinaryCLIService
org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:10016.
	at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:109)
	at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:91)
	at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:87)
	at org.apache.hive.service.auth.HiveAuthFactory.getServerSocket(HiveAuthFactory.java:241)
	at org.apache.hive.service.cli.thrift.ThriftBinaryCLIService.run(ThriftBinaryCLIService.java:66)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.BindException: Address already in use (Bind failed)
	at java.net.PlainSocketImpl.socketBind(Native Method)
	at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
	at java.net.ServerSocket.bind(ServerSocket.java:375)
	at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:106)
	... 5 more
```

The default port for the Thrift Server is 10016, so we ran netstat to find out who is using the port:

```
netstat -tulpn | grep 10016
```

We get nothing back, meaning no application is listening on port 10016, so we do not understand how the log can say "Address already in use" when nothing is using the port. Any suggestions? For comparison, here is what we get on the good node (master-node2):

```
# netstat -tulpn | grep 10016
tcp6       0      0 :::10016        :::*        LISTEN      26092/java
# ps -ef | grep 26092
hive     26092     1  6 07:14 ?        00:01:34 /usr/jdk64/jdk1.8.0_112/bin/java -Dhdp.version=2.6.4.0-91 ........
```
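One thing worth noting: `netstat -tulpn` only lists listening sockets, so a port held by a socket in another state, or one that was only bound briefly, will not show up. A sketch of broader checks, assuming `ss` and `lsof` are installed on the node:

```shell
# all TCP sockets touching 10016, in any state (TIME_WAIT, CLOSE_WAIT, ...)
ss -tanp | grep 10016

# any process that has the port open
lsof -i :10016
```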
03-07-2019
04:42 PM
How can we recursively copy jar files from HDFS (the jar files are under subfolders) to a local folder? Example:

```
export hdfs_folder=/app/lib
export local_folder=/home/work_app
```

Under /app/lib we have the following subfolders containing the jar files: /app/lib/folder_jar1, /app/lib/folder_jar2. Under each of the above folders we have jar files. The following command copies only the jar files directly under /app/lib, but not the jar files under the subfolders such as /app/lib/folder_jar1, /app/lib/folder_jar2, etc.:

```
hadoop fs -copyToLocal $hdfs_folder/*.jar $local_folder
```
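One common workaround is to walk the tree with `hadoop fs -ls -R`, filter for jar files, and copy each one individually. A sketch using the variables above (it assumes HDFS paths contain no spaces, since the path is taken as the last whitespace-separated field):

```shell
export hdfs_folder=/app/lib
export local_folder=/home/work_app

# -ls -R lists the whole tree; the last field of each line is the HDFS path
hadoop fs -ls -R "$hdfs_folder" | awk '{print $NF}' | grep '\.jar$' | \
while read -r jar; do
  hadoop fs -copyToLocal "$jar" "$local_folder/"
done
```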
Labels: Apache Hadoop
02-26-2019
09:10 PM
I searched in Google but did not find it: is it possible to create a link between an HDFS folder and a local folder? For example, we want to link folder_1 in HDFS to the local folder /home/hdfs_mirror.

HDFS folder:

```
su hdfs
$ hdfs dfs -ls /hdfs_home/folder_1
```

Local Linux folder:

```
ls /home/hdfs_mirror
```
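As far as I know, HDFS has no native symlink into the local filesystem, but if the HDFS NFS Gateway is enabled on the cluster, HDFS can be mounted into a local directory, which gives a similar effect. A sketch; the gateway host name is an assumption:

```shell
# mount the HDFS namespace via the NFS Gateway (NFSv3 only)
mkdir -p /home/hdfs_mirror
mount -t nfs -o vers=3,proto=tcp,nolock,noatime nfs-gateway-host:/ /home/hdfs_mirror
```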
02-26-2019
02:17 PM
But in case we also have subfolders under /hdp, what can we do then?