Member since: 05-30-2019
Posts: 86
Kudos Received: 1
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1623 | 11-21-2019 10:59 AM |
02-08-2021
10:33 AM
Hi @AmirMirza, thank you for your message. I am able to access the other services' UIs, such as the Ranger UI, YARN UI, and Zeppelin notebook, after entering my credentials on the Knox portal. It seems that only the NameNode UI has this odd behavior. The NameNode UI used to work a couple of days ago, and we have not changed any settings since...
02-04-2021
04:09 PM
Hi,
On Ambari, I am trying to access the NameNode UI through the Knox portal using my credentials. After entering a valid username and password, I am returned to the Knox portal again.
Knox logs:
21/02/04 18:25:48 ||aa72c68a-11yf-10ae-b720-c01b2b456pcq|audit|00.0.00.00|KNOXSSO||||access|uri|/gateway/knoxsso/api/v1/websso?originalUrl=https://Hdp-server-ms01.cal.com:50470/index.html|unavailable|Request method: POST
21/02/04 18:25:49 ||aa72c68a-11yf-10ae-b720-c01b2b456pcq|audit|00.0.00.00|KNOXSSO|user1|||authentication|uri|/gateway/knoxsso/api/v1/websso?originalUrl=https://Hdp-server-ms01.cal.com:50470/index.html|success|
21/02/04 18:25:49 ||aa72c68a-11yf-10ae-b720-c01b2b456pcq|audit|00.0.00.00|KNOXSSO|user1|||authentication|uri|/gateway/knoxsso/api/v1/websso?originalUrl=https://Hdp-server-ms01.cal.com:50470/index.html|success|Groups: [CN=HDP_DEV_ADMIN_HDP,OU=HDP,OU=Applications,OU=Groups,OU=Cal,DC=corp,DC=cal,DC=ca, CN=HDP_USERS,OU=HDP,OU=Applications,OU=Groups,OU=Cal,DC=corp,DC=cal,DC=ca, CN=HDP_PRD_CG_SS_STOR_ANY_RW,OU=HDP,OU=Applications,OU=Groups,OU=Cal,DC=corp,DC=cal,DC=ca, CN=HDP_USERS,OU=HDP,OU=Applications,OU=Groups,OU=Cal,DC=corp,DC=cal,DC=ca, CN=HDP_PRD_ADMIN_HDP,OU=HDP,OU=Applications,OU=Groups,OU=Cal,DC=corp,DC=cal,DC=ca]
21/02/04 18:25:49 |||audit|00.0.00.00|KNOXSSO|user1|||access|uri|/gateway/knoxsso/api/v1/websso?originalUrl=https://Hdp-server-ms01.cal.com:50470/index.html|success|Response status: 303
21/02/04 18:25:49 ||d111c112-1c11-1ec1-bc1b-1116039111dq|audit|00.0.00.00|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/redirecting.html?originalUrl=https://Hdp-server-ms01.cal.com:50470/index.html|unavailable|Request method: GET
21/02/04 18:25:49 ||d111c112-1c11-1ec1-bc1b-1116039111dq|audit|00.0.00.00|knoxauth|anonymous|||authentication|uri|/gateway/knoxsso/knoxauth/redirecting.html?originalUrl=https://Hdp-server-ms01.cal.com:50470/index.html|success|
21/02/04 18:25:49 |||audit|00.0.00.00|knoxauth|anonymous|||access|uri|/gateway/knoxsso/knoxauth/redirecting.html?originalUrl=https://Hdp-server-ms01.cal.com:50470/index.html|success|Response status: 200
21/02/04 18:25:49 ||d221c112-1c12-1ec1-bc1b-2212039121dq|audit|00.0.00.00|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/styles/bootstrap.min.css|unavailable|Request method: GET
21/02/04 18:25:49 ||d221c112-1c12-1ec1-bc1b-2212039121dq|audit|00.0.00.00|knoxauth|anonymous|||authentication|uri|/gateway/knoxsso/knoxauth/styles/bootstrap.min.css|success|
21/02/04 18:25:49 |||audit|00.0.00.00|knoxauth|anonymous|||access|uri|/gateway/knoxsso/knoxauth/styles/bootstrap.min.css|success|Response status: 200
21/02/04 18:25:49 ||d331j332-4u44-3ec3-bc3b-3333034444dq|audit|00.0.00.00|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/styles/knox.css|unavailable|Request method: GET
21/02/04 18:25:49 ||d331j332-4u44-3ec3-bc3b-3333034444dq|audit|00.0.00.00|knoxauth|anonymous|||authentication|uri|/gateway/knoxsso/knoxauth/styles/knox.css|success|
21/02/04 18:25:49 |||audit|00.0.00.00|knoxauth|anonymous|||access|uri|/gateway/knoxsso/knoxauth/styles/knox.css|success|Response status: 200
21/02/04 18:25:50 ||11d9410d-8181-861c-7145-8418b134c177|audit|00.0.00.00|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/images/loading.gif|unavailable|Request method: GET
21/02/04 18:25:50 ||11d9410d-8181-861c-7145-8418b134c177|audit|00.0.00.00|knoxauth|anonymous|||authentication|uri|/gateway/knoxsso/knoxauth/images/loading.gif|success|
21/02/04 18:25:50 |||audit|00.0.00.00|knoxauth|anonymous|||access|uri|/gateway/knoxsso/knoxauth/images/loading.gif|success|Response status: 200
21/02/04 18:25:50 ||5bb8b899-b412-4105-bd56-ab7i1ea7auab|audit|00.0.00.00|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/redirecting.jsp?originalUrl=https://Hdp-server-ms01.cal.com:50470/index.html|unavailable|Request method: GET
21/02/04 18:25:50 ||5bb8b899-b412-4105-bd56-ab7i1ea7auab|audit|00.0.00.00|knoxauth|anonymous|||authentication|uri|/gateway/knoxsso/knoxauth/redirecting.jsp?originalUrl=https://Hdp-server-ms01.cal.com:50470/index.html|success|
21/02/04 18:25:50 |||audit|00.0.00.00|knoxauth|anonymous|||access|uri|/gateway/knoxsso/knoxauth/redirecting.jsp?originalUrl=https://Hdp-server-ms01.cal.com:50470/index.html|success|Response status: 200
21/02/04 18:25:50 ||aeb1ce2a-1147-4d3a-9cd8-722623b9d349|audit|00.0.00.00|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/images/loading.gif|unavailable|Request method: GET
21/02/04 18:25:50 ||aeb1ce2a-1147-4d3a-9cd8-722623b9d349|audit|00.0.00.00|knoxauth|anonymous|||authentication|uri|/gateway/knoxsso/knoxauth/images/loading.gif|success|
21/02/04 18:25:50 |||audit|00.0.00.00|knoxauth|anonymous|||access|uri|/gateway/knoxsso/knoxauth/images/loading.gif|success|Response status: 200
21/02/04 18:25:50 ||25cdb2ea-8dcf-44ae-919b-b7a5a58c26b7|audit|00.0.00.00|KNOXSSO||||access|uri|/gateway/knoxsso/api/v1/websso?originalUrl=https://Hdp-server-ms01.cal.com:50470/index.html|unavailable|Request method: GET
21/02/04 18:25:50 |||audit|00.0.00.00|KNOXSSO||||access|uri|/gateway/knoxsso/api/v1/websso?originalUrl=https://Hdp-server-ms01.cal.com:50470/index.html|success|Response status: 401
21/02/04 18:25:50 ||5856cc3a-0fa2-4a1b-8429-921fcfb370b7|audit|00.0.00.00|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/login.html?originalUrl=https://Hdp-server-ms01.cal.com:50470/index.html|unavailable|Request method: GET
21/02/04 18:25:50 ||5856cc3a-0fa2-4a1b-8429-921fcfb370b7|audit|00.0.00.00|knoxauth|anonymous|||authentication|uri|/gateway/knoxsso/knoxauth/login.html?originalUrl=https://Hdp-server-ms01.cal.com:50470/index.html|success|
21/02/04 18:25:50 |||audit|00.0.00.00|knoxauth|anonymous|||access|uri|/gateway/knoxsso/knoxauth/login.html?originalUrl=https://Hdp-server-ms01.cal.com:50470/index.html|success|Response status: 200
21/02/04 18:25:50 ||42901bbb-i0f3-4964-b41e-a0d85bc3b247|audit|00.0.00.00|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/styles/hwx-login.css|unavailable|Request method: GET
21/02/04 18:25:50 ||42901bbb-i0f3-4964-b41e-a0d85bc3b247|audit|00.0.00.00|knoxauth|anonymous|||authentication|uri|/gateway/knoxsso/knoxauth/styles/hwx-login.css|success|
21/02/04 18:25:50 |||audit|00.0.00.00|knoxauth|anonymous|||access|uri|/gateway/knoxsso/knoxauth/styles/hwx-login.css|success|Response status: 200
However, if I enter the wrong credentials, I get the following message:
knox logs:
01-13-2021
12:56 PM
Also, on the Ambari UI we have the following configuration for the Spark (Livy2) service, and on the Kerberos page we have the following config. My understanding is that the value in the Spark (Livy2) service config,
livy.server.launch.kerberos.principal = livy/_HOST@<REALM>
is replaced by the value set on the Kerberos page:
livy.server.launch.kerberos.principal = ${livy2-env/livy2_user}/_HOST@${realm}
What is the value of livy2-env, and where can I get this information? For livy2_user the value is livy, according to the parameter set in Ambari. Please also find below the list of principals in /etc/security/keytabs/livy.service.keytab and /etc/security/keytabs/spnego.service.keytab on the Livy2 server host <HOST1> (or <HOST2>).
Note: example placeholder mappings:
<HA_HOST_URL> --> myhaproxy.test.com
<HOST1_URL> --> hdp-dev-ms01.test.com
<HOST1> --> hdp-dev-ms01
<REALM> --> CORP.MYREALM.COM
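To illustrate how that substitution works, here is a small sketch of Ambari-style ${group/name} placeholder expansion using the values above. This is an illustration, not Ambari's actual code; the config dict values are assumptions taken from the description above.

```python
import re

# Values assumed from the Ambari pages described above:
# livy2_user comes from the livy2-env configuration group,
# realm from the Kerberos settings page.
config = {
    "livy2-env/livy2_user": "livy",
    "realm": "CORP.MYREALM.COM",
}

def expand(template, config):
    """Expand ${name} or ${group/name} placeholders against a config dict."""
    return re.sub(r"\$\{([^}]+)\}", lambda m: config[m.group(1)], template)

print(expand("${livy2-env/livy2_user}/_HOST@${realm}", config))
# livy/_HOST@CORP.MYREALM.COM
```

Note that _HOST is a separate, later substitution: Hadoop services replace it with the local host's FQDN (e.g. hdp-dev-ms01.test.com) at service start, which is why the same config works on both Livy2 hosts.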
01-13-2021
08:03 AM
HDP-3.0.1.0
Spark2 2.3.0
Zeppelin Notebook 0.8.0
Kerberos 1.10.3-30
Apache Hadoop multi-node Kerberized cluster using an HAProxy load balancer,
with the following lines in haproxy.cfg:
listen spark_livy
bind <HA_PROXY_IP>:8999
mode tcp
option tcplog
server spark_livy_1 <HOST1_URL>:8999 check
server spark_livy_2 <HOST2_URL>:8999 check
Hi,
When we try to run a query in Zeppelin using the Livy2 interpreter, the following message pops up:
org.springframework.web.client.RestClientException: Error running rest call; nested exception is org.springframework.web.client.ResourceAccessException: I/O error on POST request for "http://<HA_PROXY_URL>:8999/sessions/768/statements": <HA_PROXY_URL>:8999 failed to respond; nested exception is org.apache.http.NoHttpResponseException: <HA_PROXY_URL>:8999 failed to respond
In the Livy2 server log (livy-livy-server.out), the following lines appear:
21/01/12 18:53:22 INFO InteractiveSession: Stopping InteractiveSession 768...
21/01/12 18:53:23 WARN RpcDispatcher: [ClientProtocol] Closing RPC channel with 1 outstanding RPCs.
21/01/12 18:53:23 INFO InteractiveSession: Stopped InteractiveSession 768.
21/01/13 10:28:47 WARN AuthenticationFilter: AuthenticationToken ignored: org.apache.hadoop.security.authentication.util.SignerException: Invalid signature
In the log (zeppelin-interpreter-livy2-zeppelin-<host>.log) on the Zeppelin host, the following appears:
ERROR [2021-01-13 12:01:48,906] ({pool-2-thread-8} LivySharedInterpreter.java[interpret]:83) - Fail to interpret:print('hello mister ')
org.apache.zeppelin.livy.LivyException: org.springframework.web.client.RestClientException: Error running rest call; nested exception is org.springframework.web.client.ResourceAccessException: I/O error on POST request for "http://<HA_PROXY_URL>:8999/sessions/768/statements": <HA_PROXY_URL>:8999 failed to respond; nested exception is org.apache.http.NoHttpResponseException: <HA_PROXY_URL>:8999 failed to respond
at org.apache.zeppelin.livy.BaseLivyInterpreter.callRestAPI(BaseLivyInterpreter.java:733)
at org.apache.zeppelin.livy.BaseLivyInterpreter.executeStatement(BaseLivyInterpreter.java:581)
at org.apache.zeppelin.livy.BaseLivyInterpreter.interpret(BaseLivyInterpreter.java:393)
at org.apache.zeppelin.livy.LivySharedInterpreter.interpret(LivySharedInterpreter.java:81)
at org.apache.zeppelin.livy.BaseLivyInterpreter.interpret(BaseLivyInterpreter.java:251)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:103)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:633)
at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.springframework.web.client.RestClientException: Error running rest call; nested exception is org.springframework.web.client.ResourceAccessException: I/O error on POST request for "http://<HA_PROXY_URL>:8999/sessions/768/statements": <HA_PROXY_URL>:8999 failed to respond; nested exception is org.apache.http.NoHttpResponseException: <HA_PROXY_URL>:8999 failed to respond
at org.springframework.security.kerberos.client.KerberosRestTemplate.doExecute(KerberosRestTemplate.java:196)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:580)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:498)
at org.apache.zeppelin.livy.BaseLivyInterpreter.callRestAPI(BaseLivyInterpreter.java:703)
... 15 more
Caused by: org.springframework.web.client.ResourceAccessException: I/O error on POST request for "http://<HA_PROXY_URL>:8999/sessions/768/statements": <HA_PROXY_URL>:8999 failed to respond; nested exception is org.apache.http.NoHttpResponseException: <HA_PROXY_URL>:8999 failed to respond
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:633)
at org.springframework.security.kerberos.client.KerberosRestTemplate.doExecuteSubject(KerberosRestTemplate.java:202)
at org.springframework.security.kerberos.client.KerberosRestTemplate.access$100(KerberosRestTemplate.java:67)
at org.springframework.security.kerberos.client.KerberosRestTemplate$1.run(KerberosRestTemplate.java:191)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.springframework.security.kerberos.client.KerberosRestTemplate.doExecute(KerberosRestTemplate.java:187)
... 18 more
Caused by: org.apache.http.NoHttpResponseException: <HA_PROXY_URL>:8999 failed to respond
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:143)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:165)
at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:167)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:271)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at org.springframework.http.client.HttpComponentsClientHttpRequest.executeInternal(HttpComponentsClientHttpRequest.java:91)
at org.springframework.http.client.AbstractBufferingClientHttpRequest.executeInternal(AbstractBufferingClientHttpRequest.java:48)
at org.springframework.http.client.AbstractClientHttpRequest.execute(AbstractClientHttpRequest.java:53)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:619)
... 24 more
Any idea on how to solve this issue?
Thank you in advance for your help.
Note: example placeholder mappings:
<HA_PROXY_IP> --> xx.x.xx.xx
<HA_PROXY_URL> --> myhaproxy.test.com
<HOST1_URL> --> hdp-dev-ms01.test.com
<HOST1> --> hdp-dev-ms01
<HOST2_URL> --> hdp-dev-ms02.test.com
<HOST2> --> hdp-dev-ms02
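One possible cause worth checking (an assumption, not confirmed from these logs): Livy interactive sessions are stateful, so with default round-robin TCP balancing, a request for session 768 can land on the Livy server that does not own that session and fail. A hedged haproxy.cfg sketch that pins each client to one backend, reusing the placeholders above:

```
listen spark_livy
  bind <HA_PROXY_IP>:8999
  mode tcp
  option tcplog
  # balance source keeps each client IP on the same backend, so
  # follow-up calls for an existing Livy session reach its owner
  balance source
  server spark_livy_1 <HOST1_URL>:8999 check
  server spark_livy_2 <HOST2_URL>:8999 check
```

The "AuthenticationToken ignored: ... Invalid signature" warning could likewise indicate the two Livy servers do not share the same authentication signature secret, so a token issued by one server is rejected by the other; that is a separate thing to verify.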
01-12-2021
12:05 PM
Hi,
We have the following alert in Ambari from the MapReduce History Server Web UI check. We have been receiving alerts today that the History Server Web UI encountered a connection failure or operation timeout. As checked, the History Server application did not stop or restart at the time of the alert. Below is the alert message:
History Server Web UI
Connection failed to https://<host>:19890 (Execution of 'curl --location-trusted -k --negotiate -u : -b /var/lib/ambari-agent/tmp/cookies/b691e974-6769-43c2-bba1-e486d6007948 -c /var/lib/ambari-agent/tmp/cookies/b691e974-6769-43c2-bba1-e486d6007948 -w '%{http_code}' https://<host>:19890 --connect-timeout 5 --max-time 7 -o /dev/null 1>/tmp/tmptSz439 2>/tmp/tmpb5DaZ0' returned 28. [curl progress meter omitted] curl: (28) Operation timed out after 7827 milliseconds with 0 out of -1 bytes received 401)
Cluster: <cluster1>
Host: <host>
Could you please help us find out why this alert keeps happening? Thank you!
11-24-2020
10:54 AM
Hi,
I would like to put the Hive metastores in HA using HAProxy load balancing. In the haproxy.cfg config file, we have already put HiveServer2 in HA with the following lines:
listen hs2_meta_env
bind xx.x.xx.xx:9083
mode tcp
option tcplog
balance first
server hs2_prd_meta_1 <node1_url>:9083 check
server hs2_prd_meta_2 <node2_url>:9083 check
It works for HiveServer2. How can we do the same for the metastores, which are installed on <node1_url> and <node2_url>? Thank you for your time.
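For the metastore specifically, a proxy may not be required: Hive clients can fail over natively when hive.metastore.uris lists more than one instance, trying the next URI if the first is unreachable. A sketch of that setting in hive-site.xml, reusing the <node1_url>/<node2_url> placeholders above (illustrative, not a tested configuration for this cluster):

```xml
<!-- hive-site.xml: clients try the first metastore URI and fall back
     to the next one in the list if it is unreachable -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://<node1_url>:9083,thrift://<node2_url>:9083</value>
</property>
```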
11-20-2020
10:16 AM
Hi @Scharan, thank you for your help. Does this property need to be added in the Zeppelin interpreter settings or in the Spark config in Ambari?
11-20-2020
08:39 AM
Hi,
I get the following message when I run a query in Zeppelin using the Livy2 interpreter. Any idea how to fix this issue?
HDP 3.0.1
Ambari 2.7.1.0
Zeppelin Notebook 0.8.0
Spark2 2.3.0
09-04-2020
08:24 AM
Using:
HDP 3.0.1
HDFS 3.1.0
NameNode heap: 84.8% (3.3 GB / 4.0 GB)
Disk usage (DFS used): 71.20% (63.9 TB / 89.7 TB)
Disk usage (non-DFS used): 0.74% (676.2 GB / 89.7 TB)
Disk remaining: 28.06% (25.2 TB / 89.7 TB)
Block size: 128 MB
Any idea how to reduce the NameNode heap usage? Thank you.
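As a rough check on whether 4 GB of heap is undersized, a common rule of thumb is about 1 GB of NameNode heap per million namespace objects (files + blocks). A back-of-the-envelope sketch, with the caveat that the rule of thumb is only an approximation and the real object count must come from the NameNode UI:

```python
def estimate_heap_gb(num_objects, gb_per_million=1.0):
    """Rule-of-thumb NameNode heap: ~1 GB per million files + blocks."""
    return num_objects / 1_000_000 * gb_per_million

# If the 63.9 TB of DFS-used data sat entirely in large files at a
# 128 MB block size, the block count would be modest:
blocks = 63.9 * 1024 * 1024 / 128          # ~523,469 blocks
print(round(estimate_heap_gb(blocks), 2))  # ~0.52 GB

# 84.8% of a 4 GB heap therefore suggests far more namespace objects
# than that, i.e. many small files; reducing the small-file count
# (or raising the heap) is the usual remedy.
```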
08-19-2020
08:05 AM
Hi,
I am currently facing an issue with NiFi 1.7.0 (2 nodes). My flows had been running fine for months. A couple of weeks ago we downgraded the NiFi machines from 16 CPUs / 128 GB RAM to 4 CPUs / 32 GB RAM. After the downgrade, we changed the following parameters so that NiFi takes the new machine size into account:
Initial values:
Initial memory allocation: 80G
Max memory allocation: 112G
New values:
Initial memory allocation: 20G
Max memory allocation: 26G
Within NiFi, in General Settings, we also changed:
Maximum timer driven thread count: from 64 to 2
Maximum event driven thread count: from 16 to 4
Now, sometimes when we try to load we get an "unexpected error occurred" with:
2020-08-18 10:57:51,942 ERROR [NiFi logging handler] org.apache.nifi.StdErr OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000780000000, 218103808, 0) failed; error='Cannot allocate memory' (errno=12)
2020-08-18 10:57:51,956 INFO [NiFi logging handler] org.apache.nifi.StdOut # There is insufficient memory for the Java Runtime Environment to continue.
2020-08-18 10:57:51,957 INFO [NiFi logging handler] org.apache.nifi.StdOut # Native memory allocation (mmap) failed to map 218103808 byt
Could you please help me with this issue?
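If those memory values map to NiFi's bootstrap.conf heap arguments (an assumption about where they were set), note that a 26 GB max heap on a 32 GB host leaves little headroom for the JVM's native allocations, other processes, and the OS page cache, which is exactly what the os::commit_memory mmap failure above indicates. A more conservative sketch:

```
# nifi/conf/bootstrap.conf (JVM heap arguments)
# Leave headroom on a 32 GB host for native memory, other processes,
# and the OS page cache; these values are illustrative, not prescriptive.
java.arg.2=-Xms8g
java.arg.3=-Xmx16g
```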
Labels:
- Apache Ambari
- Apache Hadoop
- Apache NiFi