Member since: 11-07-2016
Posts: 637
Kudos Received: 253
Solutions: 144
My Accepted Solutions
Views | Posted
---|---
2302 | 12-06-2018 12:25 PM
2350 | 11-27-2018 06:00 PM
1823 | 11-22-2018 03:42 PM
2909 | 11-20-2018 02:00 PM
5251 | 11-19-2018 03:24 PM
10-11-2017
12:03 PM
@Yevgen Shramko, For the LLAP server, this should be configured under Advanced hive-interactive-site. Check the screenshot attached. Thanks, Aditya
10-11-2017
10:10 AM
Adding a sample example here without Knox; it would be similar with Knox as well.
[root@xx user]# curl -i -X PUT "http://<namenode host>:50070/webhdfs/v1/tmp/testa/a.txt?user.name=livy&op=CREATE"
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Tue, 26 Sep 2017 17:33:17 GMT
Date: Tue, 26 Sep 2017 17:33:17 GMT
Pragma: no-cache
Expires: Tue, 26 Sep 2017 17:33:17 GMT
Date: Tue, 26 Sep 2017 17:33:17 GMT
Pragma: no-cache
X-FRAME-OPTIONS: SAMEORIGIN
Set-Cookie: hadoop.auth="u=livy&p=livy&t=simple&e=1506483197716&s=dRvADKPG0lrenLje4fmEEdgChFw="; Path=/; HttpOnly
Location: http://xxx:50075/webhdfs/v1/tmp/testa/a.txt?op=CREATE&user.name=livy&namenoderpcaddress=xxx:8020&createflag=&createparent=true&overwrite=false
Content-Type: application/octet-stream
Content-Length: 0
Server: Jetty(6.1.26.hwx)
### second curl call to the Location obtained above
[root@xxx user]# curl -i -T /tmp/a.txt "http://xxx:50075/webhdfs/v1/tmp/testa/a.txt?op=CREATE&user.name=livy&namenoderpcaddress=xxx:8020&createflag=&createparent=true&overwrite=false"
10-11-2017
08:44 AM
1 Kudo
@Sen Ke, It looks like your 1st and 2nd calls are to the same URL. Hit the 1st URL and capture the response. Sample response:
HTTP/1.1 307 TEMPORARY_REDIRECT
Location: http://<xxx>:<yyy>/webhdfs/v1/<PATH>?op=CREATE
Content-Length: 0
Now make the 2nd curl call to the URL obtained from the response headers of the 1st call (the Location header, i.e. http://<xxx>:<yyy>/webhdfs/v1/<PATH>?op=CREATE...). Thanks, Aditya
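If you want to script this instead of copying the URL by hand, the Location header can be pulled out of the first raw response programmatically. A minimal Python sketch; the helper name and the sample hostname are illustrative, not part of WebHDFS:

```python
def extract_location(raw_response: str) -> str:
    """Return the value of the Location header from a raw HTTP response."""
    for line in raw_response.splitlines():
        if line.lower().startswith("location:"):
            # Split only on the first colon so the URL's own colons survive.
            return line.split(":", 1)[1].strip()
    raise ValueError("no Location header in response")

sample = (
    "HTTP/1.1 307 TEMPORARY_REDIRECT\r\n"
    "Location: http://datanode.example.com:50075/webhdfs/v1/tmp/a.txt?op=CREATE\r\n"
    "Content-Length: 0\r\n"
)
print(extract_location(sample))
# http://datanode.example.com:50075/webhdfs/v1/tmp/a.txt?op=CREATE
```

The extracted URL is then the target of the second PUT that actually carries the file data.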
10-11-2017
08:04 AM
2 Kudos
Hi @Sen Ke, Hope you are trying to do some WebHDFS operations. This is common for WebHDFS. For example, if you want to upload a file using WebHDFS, you have to make 2 curl calls. The response code of the 1st curl call will be a 307 redirect. The second call should be made to the Location header obtained from the 1st curl call.
Sample curl calls to create and write to a file:
Step 1: Submit an HTTP PUT request without automatically following redirects and without sending the file data.
curl -i -X PUT "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=CREATE
[&overwrite=<true|false>][&blocksize=<LONG>][&replication=<SHORT>]
[&permission=<OCTAL>][&buffersize=<INT>]"
The request is redirected to a datanode where the file data is to be written:
HTTP/1.1 307 TEMPORARY_REDIRECT
Location: http://<DATANODE>:<PORT>/webhdfs/v1/<PATH>?op=CREATE...
Content-Length: 0
Step 2: Submit another HTTP PUT request using the URL in the Location header, with the file data to be written.
curl -i -X PUT -T <LOCAL_FILE> "http://<DATANODE>:<PORT>/webhdfs/v1/<PATH>?op=CREATE..."
The client receives a 201 Created response with zero content length and the WebHDFS URI of the file in the Location header:
HTTP/1.1 201 Created
Location: webhdfs://<HOST>:<PORT>/<PATH>
Content-Length: 0
The above URLs are for plain WebHDFS. To access it through Knox, use the Knox URL instead of the HDFS one. You can read more about WebHDFS here and Knox WebHDFS here. Thanks, Aditya
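The Step 1 URL above can also be assembled programmatically. A small Python sketch; the function name and the example host, port, and path are placeholders, while the query parameters (`op`, `user.name`, `overwrite`) come from the WebHDFS options quoted above:

```python
from urllib.parse import urlencode

def webhdfs_create_url(host: str, port: int, path: str, user: str,
                       overwrite: bool = False) -> str:
    """Build the Step 1 CREATE URL. The PUT against it must NOT
    follow redirects; the 307 response carries the datanode URL."""
    params = {"op": "CREATE", "user.name": user,
              "overwrite": str(overwrite).lower()}
    return f"http://{host}:{port}/webhdfs/v1{path}?{urlencode(params)}"

print(webhdfs_create_url("namenode.example.com", 50070, "/tmp/testa/a.txt", "livy"))
# http://namenode.example.com:50070/webhdfs/v1/tmp/testa/a.txt?op=CREATE&user.name=livy&overwrite=false
```

Using `urlencode` avoids hand-escaping issues when paths or usernames contain special characters.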
10-11-2017
06:10 AM
1 Kudo
@Ashikin, From the log it looks like the port is in use by another process. Check whether the port is occupied by running:
netstat -tupln | grep 9995
Sample output:
tcp6 0 0 :::9995 :::* LISTEN 4683/java
(4683 is the PID here.) Get the PID from the command output and kill the process:
kill -9 4683
After killing the process, check whether the directory /usr/hdp/2.5.3.0-37/zeppelin/webapps is present, and remove it if it exists:
rm -rf /usr/hdp/2.5.3.0-37/zeppelin/webapps
Restart Zeppelin after doing the above steps. Thanks, Aditya
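If you script this cleanup, the PID can be parsed out of the netstat line rather than copied by hand. A minimal Python sketch; the function name is made up for illustration, and the sample line is the one shown above:

```python
import re

def pid_from_netstat_line(line: str) -> int:
    """Extract the PID from the trailing 'PID/program' column
    of a `netstat -tupln` output line."""
    match = re.search(r"\b(\d+)/\S+\s*$", line)
    if not match:
        raise ValueError("no PID/program field found")
    return int(match.group(1))

line = "tcp6       0      0 :::9995      :::*      LISTEN      4683/java"
print(pid_from_netstat_line(line))  # 4683
```

The returned PID would then be passed to the kill step shown above.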
10-11-2017
05:24 AM
@Ashikin, There isn't much info in this file. There will be another log file, zeppelin-zeppelin-<hostname>-.log. Can you please attach that file?
10-11-2017
05:17 AM
You can use the yarn keytab to access this (/etc/security/keytabs/yarn.headless.keytab). Yes, you can access this using the ZK Java API as well; check the ZK Java example here. Also, were you able to get it using the Ambari REST API as mentioned in my other answer? Thanks, Aditya
10-11-2017
04:42 AM
Hi @Ashikin, Can you please attach the log files under the /var/log/zeppelin/ folder? Thanks, Aditya
10-10-2017
03:11 PM
Hi @ilia kheifets, Did you do the sudoer configuration for the Ambari agents? If not, please follow the doc below, perform the steps, restart the Ambari agents, and then try to start the history server. https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.0.3/bk_ambari-security/content/how_to_configure_an_ambari_agent_for_non-root.html Thanks, Aditya
10-10-2017
02:50 PM
2 Kudos
@Nikita Kiselev, You can also get it using the Ambari REST API:
http://<ambari-host>:<ambari-port>/api/v1/clusters/<cluster-name>/host_components?HostRoles/component_name=RESOURCEMANAGER&HostRoles/ha_state=ACTIVE
To get the STANDBY RM, replace ACTIVE with STANDBY in the ha_state parameter of the above URL. Thanks, Aditya
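For scripting, that query URL can be built for either HA state. A quick Python sketch; the function name and the example Ambari host, port, and cluster name are placeholders, while the endpoint path and predicates are the ones shown above:

```python
def rm_ha_url(ambari_host: str, ambari_port: int, cluster: str,
              ha_state: str = "ACTIVE") -> str:
    """Build the Ambari REST URL that filters ResourceManager
    host components by HA state (ACTIVE or STANDBY)."""
    return (f"http://{ambari_host}:{ambari_port}/api/v1/clusters/{cluster}"
            f"/host_components?HostRoles/component_name=RESOURCEMANAGER"
            f"&HostRoles/ha_state={ha_state}")

print(rm_ha_url("ambari.example.com", 8080, "mycluster"))
print(rm_ha_url("ambari.example.com", 8080, "mycluster", "STANDBY"))
```

The URL would then be fetched with your usual HTTP client and Ambari credentials.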