Member since: 08-05-2017
Posts: 30
Kudos Received: 2
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2819 | 08-11-2017 11:29 AM |
08-02-2018
04:26 AM
Dear experts, I have a kerberized (using Active Directory) HDP cluster and an external SolrCloud (not kerberized). I am now trying to point the Ranger audits to the external SolrCloud but cannot find any documented process. Could you please guide me on this? I followed the process in the link below to set up an external SolrCloud using the Ranger scripts, and I am confused about how to proceed with kerberization. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_security/content/solr_ranger_configure_solrcloud.html It would be really helpful if I could get any documentation/links on this process. Thanks
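For reference, pointing Ranger audits at a SolrCloud is normally a matter of setting the audit-destination properties in the Ranger configs. A minimal sketch, assuming property names as used in the HDP 2.6 Ranger configuration; the ZooKeeper connect string is a placeholder for the external SolrCloud's ensemble:

```properties
# Sketch of the Ranger audit-to-Solr settings (hostnames are placeholders):
ranger.audit.source.type=solr
ranger.audit.solr.zookeepers=solr-zk-host:2181/ranger_audits
# Plugin-side (ranger-<service>-audit) destination, same ZK connect string:
xasecure.audit.destination.solr=true
xasecure.audit.destination.solr.zookeepers=solr-zk-host:2181/ranger_audits
```

Since the external SolrCloud here is not kerberized, no Solr-side JAAS configuration should be needed on the Ranger side for these properties to take effect.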
Labels:
- Apache Ranger
- Apache Solr
06-22-2018
02:48 AM
Dear experts, We installed Ranger with LDAP sync and, a few days later, enabled Kerberos using AD. Since then the usersync fails with the error below; please advise.
21 Jun 2018 17:47:58 INFO UnixAuthenticationService [main] - Starting User Sync Service!
21 Jun 2018 17:47:58 INFO AbstractMapper [UnixUserSyncThread] - Initializing for ranger.usersync.mapping.username.regex
21 Jun 2018 17:47:58 INFO AbstractMapper [UnixUserSyncThread] - Initializing for ranger.usersync.mapping.groupname.regex
21 Jun 2018 17:47:58 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - LdapDeltaUserGroupBuilder created
21 Jun 2018 17:47:58 INFO UserGroupSyncConfig [UnixUserSyncThread] - Sleep Time Between Cycle can not be lower than [3600000] millisec. resetting to min value.
21 Jun 2018 17:47:58 INFO UserGroupSync [UnixUserSyncThread] - initializing sink: org.apache.ranger.ldapusersync.process.LdapPolicyMgrUserGroupBuilder
21 Jun 2018 17:47:58 WARN NativeCodeLoader [UnixUserSyncThread] - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
21 Jun 2018 17:47:59 INFO AbstractMapper [UnixUserSyncThread] - Initializing for ranger.usersync.mapping.username.regex
21 Jun 2018 17:47:59 INFO AbstractMapper [UnixUserSyncThread] - Initializing for ranger.usersync.mapping.groupname.regex
21 Jun 2018 17:47:59 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - LdapDeltaUserGroupBuilder created
21 Jun 2018 17:47:59 INFO UserGroupSync [UnixUserSyncThread] - initializing source: org.apache.ranger.ldapusersync.process.LdapDeltaUserGroupBuilder
21 Jun 2018 17:47:59 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - LdapDeltaUserGroupBuilder initialization started
21 Jun 2018 17:47:59 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - LdapDeltaUserGroupBuilder initialization completed with -- ldapUrl: ldap://XXXXX:389, ldapBindDn: CN=XXXX,OU=XXXX,DC=XXX,DC=XXX, ldapBindPassword: ***** , ldapAuthenticationMechanism: simple, searchBase: DC=aero,DC=local, userSearchBase: [OU=XXX,DC=XX,DC=XXX], userSearchScope: 2, userObjectClass: user, userSearchFilter: , extendedUserSearchFilter: null, userNameAttribute: sAMAccountName, userSearchAttributes: [uSNChanged, sAMAccountName, modifytimestamp], userGroupNameAttributeSet: null, pagedResultsEnabled: true, pagedResultsSize: 500, groupSearchEnabled: true, groupSearchBase: [XXXX], groupSearchScope: 2, groupObjectClass: group, groupSearchFilter: , extendedGroupSearchFilter: (&null(|(member={0})(member={1}))), extendedAllGroupsSearchFilter: null, groupMemberAttributeName: member, groupNameAttribute: name, groupSearchAttributes: [uSNChanged, name, member, modifytimestamp], groupUserMapSyncEnabled: true, groupSearchFirstEnabled: false, userSearchEnabled: false, ldapReferral: ignore
21 Jun 2018 17:47:59 INFO UserGroupSync [UnixUserSyncThread] - Begin: initial load of user/group from source==>sink
21 Jun 2018 17:47:59 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - LdapDeltaUserGroupBuilder updateSink started
21 Jun 2018 17:47:59 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - Performing user search first
21 Jun 2018 17:47:59 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - extendedUserSearchFilter = (&(objectclass=user)(|(uSNChanged>=0)(modifyTimestamp>=19700101120000Z)))
21 Jun 2018 17:47:59 INFO LdapDeltaUserGroupBuilder [UnixUserSyncThread] - uSNChangedVal = 225739and currentDeltaSyncTime = 225739
21 Jun 2018 17:47:59 ERROR LdapPolicyMgrUserGroupBuilder [UnixUserSyncThread] - Failed to add User :
com.sun.jersey.api.client.UniformInterfaceException: POST http://10.1.1.5:6080/service/users/default returned a response status of 401 Unauthorized
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:686)
at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
at com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:568)
at org.apache.ranger.ldapusersync.process.LdapPolicyMgrUserGroupBuilder.getMUser(LdapPolicyMgrUserGroupBuilder.java:672)
at org.apache.ranger.ldapusersync.process.LdapPolicyMgrUserGroupBuilder.access$500(LdapPolicyMgrUserGroupBuilder.java:73)
at org.apache.ranger.ldapusersync.process.LdapPolicyMgrUserGroupBuilder$6.run(LdapPolicyMgrUserGroupBuilder.java:645)
at org.apache.ranger.ldapusersync.process.LdapPolicyMgrUserGroupBuilder$6.run(LdapPolicyMgrUserGroupBuilder.java:641)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.ranger.ldapusersync.process.LdapPolicyMgrUserGroupBuilder.addMUser(LdapPolicyMgrUserGroupBuilder.java:641)
at org.apache.ranger.ldapusersync.process.LdapPolicyMgrUserGroupBuilder.addOrUpdateUser(LdapPolicyMgrUserGroupBuilder.java:273)
at org.apache.ranger.ldapusersync.process.LdapDeltaUserGroupBuilder.getUsers(LdapDeltaUserGroupBuilder.java:468)
at org.apache.ranger.ldapusersync.process.LdapDeltaUserGroupBuilder.updateSink(LdapDeltaUserGroupBuilder.java:311)
at org.apache.ranger.usergroupsync.UserGroupSync.run(UserGroupSync.java:58)
at java.lang.Thread.run(Thread.java:748)
21 Jun 2018 17:47:59 ERROR LdapPolicyMgrUserGroupBuilder [UnixUserSyncThread] - Failed to add portal user
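The 401 Unauthorized from `POST /service/users/default` suggests usersync stopped authenticating to Ranger admin after kerberization rather than an LDAP problem. As a hedged sketch (property names as in Ranger's ranger-ugsync-site; the principal and keytab path are placeholders for this cluster), the usersync Kerberos identity should be present and valid:

```properties
# ranger-ugsync-site (sketch -- principal/keytab are placeholders; verify the
# keytab separately with `klist -kt <keytab>` and a test kinit):
ranger.usersync.kerberos.principal=rangerusersync/_HOST@EXAMPLE.COM
ranger.usersync.kerberos.keytab=/etc/security/keytabs/rangerusersync.service.keytab
```

If these were not populated when Kerberos was enabled (e.g. Ranger was installed before kerberization), regenerating keytabs from Ambari and restarting Ranger usersync is the usual next step.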
Labels:
- Apache Ranger
06-22-2018
02:03 AM
@Geoffrey Shelton Okot thanks for your reply. I used the same database when reinstalling Ranger; I was assuming Ranger would delete all of its tables during uninstallation.
06-20-2018
06:33 PM
Dear experts, Due to some problem we have reinstalled the Ranger and KMS services. However, when I open the Ranger UI it still shows the old policies, and even the Ranger admin passwords are not getting changed. Kindly help with what the issue might be.
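A likely cause: Ranger's policies and the admin password live in its backing database, not in the service binaries, so a reinstall that reuses the old database brings everything back. A destructive sketch, assuming a MySQL backend and the common default schema names (back up first; names may differ on your cluster):

```sql
-- Drop the old Ranger schemas so the reinstall starts clean
-- (database names are the common defaults, not confirmed for this cluster):
DROP DATABASE ranger;
DROP DATABASE rangerkms;
```

After recreating the databases, the Ranger setup scripts repopulate the schema on first start.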
Labels:
- Apache Ranger
06-17-2018
02:20 PM
Dear experts, We are enabling Kerberos in our cluster, integrated with Active Directory. Kerberos has been enabled, but during the service restarts all the services fail with the error below; could you please assist?
/usr/bin/kinit -kt /etc/security/keytabs/smokeuser.headless.keytab ambari-qa-hdp@HADOOP.LOCAL;' returned 1. kinit: Preauthentication failed while getting initial credentials
Zookeeper logs:
2018-06-17 14:02:52,217 - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
2018-06-17 14:02:52,218 - INFO [main:QuorumPeerMain@127] - Starting quorum peer
2018-06-17 14:02:52,229 - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
2018-06-17 14:02:52,814 - ERROR [main:QuorumPeerMain@89] - Unexpected exception, exiting abnormally
java.io.IOException: Could not configure server because SASL configuration did not allow the ZooKeeper server to authenticate itself properly: javax.security.auth.login.LoginException: Pre-authentication information was invalid (24)
at org.apache.zookeeper.server.ServerCnxnFactory.configureSaslLogin(ServerCnxnFactory.java:207)
at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:87)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:130)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:111)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
Because of this issue, it is not letting us disable Kerberos now. Kindly help with this. Thanks, Chiranjeevi
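"Preauthentication failed" from a keytab-based kinit usually means the keytab no longer matches the account in AD (for example, the account password or kvno changed after the keytab was generated) or the host clock has drifted from the domain controllers. A diagnostic sketch using the paths from the error above:

```shell
# Inspect the principals and key version numbers stored in the keytab:
klist -kt /etc/security/keytabs/smokeuser.headless.keytab
# Try the same kinit the restart hook runs; if it fails here too,
# regenerate keytabs from Ambari (Admin > Kerberos > Regenerate Keytabs):
kinit -kt /etc/security/keytabs/smokeuser.headless.keytab ambari-qa-hdp@HADOOP.LOCAL
# Rule out clock skew between this host and the AD domain controllers:
date -u
```

Since every service (including ZooKeeper's SASL login) shows the same pre-authentication error, a cluster-wide keytab regeneration is the most common fix.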
05-30-2018
02:52 PM
@Geoffrey Shelton Okot Thanks for sharing the steps; however, I tried everything mentioned and it did not work for me. So I reset Ambari and did a fresh install of everything. Since my cluster did not have any data, this procedure worked; it would have been very hard if it had held data. I also tried manually starting the metrics monitor with "/usr/sbin/ambari-metrics-monitor start"; this created the pid file, but the Ambari server did not appear to recognize it. Hoping there is some workaround I might have missed. Thanks a lot for your time 🙂
05-29-2018
07:55 AM
@Geoffrey Shelton Okot All my VMs are running SLES 11 SP4, and in Ambari the start-all option is disabled. It is showing heartbeat lost.
05-29-2018
06:02 AM
Dear experts, I have installed a 3-node HDP cluster on Azure. Due to some problem, all the VMs restarted abruptly; after this I manually restarted all the Ambari agents and the Ambari server. I can see that the agents and the server are running fine, but all of the services are in the "Lost heartbeat" state. Could you please assist? Below is the log of an Ambari agent on the master node:
INFO 2018-05-29 05:57:21,051 Controller.py:311 - Building heartbeat message
INFO 2018-05-29 05:57:21,053 Heartbeat.py:90 - Adding host info/state to heartbeat message.
INFO 2018-05-29 05:57:21,149 logger.py:75 - Testing the JVM's JCE policy to see it if supports an unlimited key length.
INFO 2018-05-29 05:57:21,149 logger.py:75 - Testing the JVM's JCE policy to see it if supports an unlimited key length.
INFO 2018-05-29 05:57:21,995 Hardware.py:176 - Some mount points were ignored: /, /dev, /dev/shm, /, /mnt/resource
INFO 2018-05-29 05:57:21,996 Controller.py:320 - Sending Heartbeat (id = 204)
INFO 2018-05-29 05:57:22,039 Controller.py:333 - Heartbeat response received (id = 205)
INFO 2018-05-29 05:57:22,040 Controller.py:342 - Heartbeat interval is 10 seconds
INFO 2018-05-29 05:57:22,040 Controller.py:380 - Updating configurations from heartbeat
INFO 2018-05-29 05:57:22,040 Controller.py:389 - Adding cancel/execution commands
INFO 2018-05-29 05:57:22,040 Controller.py:475 - Waiting 9.9 for next heartbeat
INFO 2018-05-29 05:57:31,941 Controller.py:482 - Wait for next heartbeat over
WARNING 2018-05-29 05:57:46,099 base_alert.py:138 - [Alert][namenode_cpu] Unable to execute alert. [Alert][namenode_cpu] Unable to extract JSON from JMX response
WARNING 2018-05-29 05:57:46,108 base_alert.py:138 - [Alert][datanode_health_summary] Unable to execute alert. [Alert][datanode_health_summary] Unable to extract JSON from JMX response
WARNING 2018-05-29 05:57:46,119 base_alert.py:138 - [Alert][namenode_service_rpc_processing_latency_hourly] Unable to execute alert. Couldn't define hadoop_conf_dir: argument of type 'NoneType' is not iterable
WARNING 2018-05-29 05:57:46,123 base_alert.py:138 - [Alert][namenode_client_rpc_queue_latency_hourly] Unable to execute alert. Couldn't define hadoop_conf_dir: argument of type 'NoneType' is not iterable
WARNING 2018-05-29 05:57:46,136 base_alert.py:138 - [Alert][namenode_client_rpc_processing_latency_hourly] Unable to execute alert. Couldn't define hadoop_conf_dir: argument of type 'NoneType' is not iterable
WARNING 2018-05-29 05:57:46,145 base_alert.py:138 - [Alert][namenode_directory_status] Unable to execute alert. [Alert][namenode_directory_status] Unable to extract JSON from JMX response
WARNING 2018-05-29 05:57:46,267 base_alert.py:138 - [Alert][namenode_service_rpc_queue_latency_hourly] Unable to execute alert. Couldn't define hadoop_conf_dir: argument of type 'NoneType' is not iterable
WARNING 2018-05-29 05:57:46,280 base_alert.py:138 - [Alert][yarn_resourcemanager_cpu] Unable to execute alert. [Alert][yarn_resourcemanager_cpu] Unable to extract JSON from JMX response
WARNING 2018-05-29 05:57:46,282 base_alert.py:138 - [Alert][yarn_resourcemanager_rpc_latency] Unable to execute alert. [Alert][yarn_resourcemanager_rpc_latency] Unable to extract JSON from JMX response
WARNING 2018-05-29 05:57:46,296 base_alert.py:138 - [Alert][smartsense_gateway_status] Unable to execute alert. [Alert][smartsense_gateway_status] Unable to extract JSON from JMX response
WARNING 2018-05-29 05:57:46,298 logger.py:71 - Cannot find the stack name in the command. Stack tools cannot be loaded
WARNING 2018-05-29 05:57:46,300 base_alert.py:138 - [Alert][smartsense_long_running_bundle] Unable to execute alert. [Alert][smartsense_long_running_bundle] Unable to extract JSON from JMX response
WARNING 2018-05-29 05:57:46,298 logger.py:71 - Cannot find the stack name in the command. Stack tools cannot be loaded
INFO 2018-05-29 05:57:46,303 logger.py:75 - call[('ambari-python-wrap', None, 'versions')] {}
INFO 2018-05-29 05:57:46,303 logger.py:75 - call[('ambari-python-wrap', None, 'versions')] {}
INFO 2018-05-29 05:57:46,712 logger.py:75 - Pid file /var/run/ambari-metrics-monitor/ambari-metrics-monitor.pid is empty or does not exist
INFO 2018-05-29 05:57:46,712 logger.py:75 - Pid file /var/run/ambari-metrics-monitor/ambari-metrics-monitor.pid is empty or does not exist
ERROR 2018-05-29 05:57:46,713 script_alert.py:123 - [Alert][ams_metrics_monitor_process] Failed with result CRITICAL: ['Ambari Monitor is NOT running on hdpmaster']
ERROR 2018-05-29 05:57:46,713 script_alert.py:123 - [Alert][ams_metrics_monitor_process] Failed with result CRITICAL: ['Ambari Monitor is NOT running on hdpmaster']
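Heartbeat-lost after an abrupt VM restart is often a hostname mismatch: a cloud VM can come back with a different internal hostname, and the agent then registers under a name the Ambari server does not recognize (the log above shows heartbeats being sent and answered, so connectivity itself is fine). A diagnostic sketch, using the default HDP agent config path:

```shell
# The name the agent registers with must match what the server has on record:
hostname -f
grep -A2 '\[server\]' /etc/ambari-agent/conf/ambari-agent.ini
# After correcting any mismatch, re-register the agent:
ambari-agent restart
```

The empty ambari-metrics-monitor pid file in the log is a separate symptom; restarting AMS from Ambari after the agents re-register usually clears it.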
Labels:
- Apache Ambari
12-22-2017
04:17 AM
Dear experts, I am running HDP 2.4 on an AWS cluster. I am trying to decommission a datanode, but the status appears to be stuck at "Decommissioning" for a long time. If I try to delete the host, the below error message is displayed. Could you please help?
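For context, decommissioning completes only once every block hosted on the node has been re-replicated elsewhere; on a small cluster where the replication factor equals the number of live datanodes it can never finish. A diagnostic sketch:

```shell
# Watch decommissioning progress and datanode counts:
hdfs dfsadmin -report
# Look for blocks that cannot reach their target replication
# (a common cause of a permanently "Decommissioning" node):
hdfs fsck / | grep -i 'under'
```

If the replication factor is the blocker, lowering it for the affected paths (or adding a node) lets the decommission proceed.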
Labels:
- Apache Hadoop
11-23-2017
09:39 PM
Thanks @Jan Dombrowicz, I will use Spark 1 with Zeppelin.
11-23-2017
06:37 AM
Dear experts, I am running the HDP 2.5 sandbox on Azure with Zeppelin 0.6.0.2.5 and Spark 2.0.0.2.5. Zeppelin recognizes only Spark 1, not Spark 2; could you please help with this? Are the versions incompatible? I tried creating a new interpreter named spark2 as in the attached images, but it is still not working. Below are the installed Spark versions in my sandbox:
[root@sandbox spark2]# spark-shell --version
Multiple versions of Spark are installed but SPARK_MAJOR_VERSION is not set
Spark1 will be picked by default
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 1.6.2
/_/
Type --help for more information.
[root@sandbox spark2]# export SPARK_MAJOR_VERSION=2
[root@sandbox spark2]# spark-submit --version
SPARK_MAJOR_VERSION is set to 2, using Spark2
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.0.0.2.5.0.0-1245
/_/
Branch HEAD
Compiled by user jenkins on 2016-08-26T02:52:23Z
Revision c036d8e15666e914cd6aaf90ada0390619828b10
Url git@github.com:hortonworks/spark2.git
Type --help for more information.
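As the transcripts above show, HDP's wrapper scripts pick Spark 1 unless SPARK_MAJOR_VERSION=2 is exported. Zeppelin launches its interpreter with its own environment, so the variable must also be set there (e.g. in zeppelin-env; this is an assumption, since Zeppelin 0.6 on HDP 2.5 had limited Spark 2 support). A self-contained sketch of the selection logic, with the `/usr/hdp/current` symlink names following HDP conventions:

```shell
# Mimics the version selection done by the HDP wrapper scripts shown above.
export SPARK_MAJOR_VERSION=2
if [ "${SPARK_MAJOR_VERSION}" = "2" ]; then
  SPARK_HOME=/usr/hdp/current/spark2-client   # Spark 2.0.0.2.5
else
  SPARK_HOME=/usr/hdp/current/spark-client    # Spark 1.6.2 (the default)
fi
echo "SPARK_HOME=${SPARK_HOME}"
```

Pointing the Zeppelin interpreter's SPARK_HOME at the spark2-client directory (and exporting SPARK_MAJOR_VERSION=2 in zeppelin-env) is the equivalent change on the Zeppelin side.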
Labels:
- Apache Spark
- Apache Zeppelin
10-30-2017
09:18 AM
@Jay SenSharma I can see that root is the user running ambari-server and that it owns the '/var/run/ambari-server' directory. Please find below:
/var/run/ambari-server:
total 12
drwxr-xr-x 2 root root 4096 Oct 25 2016 bootstrap
drwxr-xr-x 1 root root 4096 Dec 19 2016 stack-recommendations
-rw-r--r-- 1 root root 6 Oct 30 08:17 ambari-server.pid
[root@sandbox ~]# ps -ef | grep ambari-server
root 10950 1 12 08:17 ? 00:06:52 /usr/lib/jvm/java/bin/java -server -XX:NewRatio=3 -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit -XX:CMSInitiatingOccupancyFraction=60 -XX:+CMSClassUnloadingEnabled -Dsun.zip.disableMemoryMapping=true -Xms512m -Xmx2048m -XX:MaxPermSize=128m -Djava.security.auth.login.config=/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false -Xms512m -Xmx2048m -XX:MaxPermSize=128m -Djava.security.auth.login.config=/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false -cp /etc/ambari-server/conf:/usr/lib/ambari-server/*:/usr/share/java/postgresql-jdbc.jar org.apache.ambari.server.controller.AmbariServer
root 16517 16342 0 09:12 pts/0 00:00:00 grep ambari-server
However, I tried to follow the second instruction, but unfortunately I am getting the below error:
[root@sandbox ambari-server]# mv stack-recommendations ~
mv: cannot remove `stack-recommendations/2': Invalid argument
mv: cannot remove `stack-recommendations/3': Invalid argument
mv: cannot remove `stack-recommendations/1': Invalid argument
mv: cannot remove `stack-recommendations/6': Invalid argument
mv: cannot remove `stack-recommendations/5': Invalid argument
mv: cannot remove `stack-recommendations/4': Invalid argument
10-30-2017
09:04 AM
Hi everyone, I am running the HDP 2.5 sandbox in the Azure cloud. After logging into Ambari as admin and trying to add a service (e.g. NiFi), I am getting the below error. Could you please help?
Error message: Error occured during stack advisor command invocation: Unable to delete directory /var/run/ambari-server/stack-recommendations/2.
Below are a few details of the sandbox:
[root@sandbox ~]# sandbox-version
Sandbox information:
Created on: 25_10_2016_08_11_26 for
Hadoop stack version: Hadoop 2.7.3.2.5.0.0-1245
Ambari Version: 2.4.0.0-1225
Ambari Hash: 59175b7aa1ddb74b85551c632e3ce42fed8f0c85
Ambari build: Release : 1225
Java version: 1.8.0_111
OS Version: CentOS release 6.8 (Final)
[root@sandbox ~]# ps -ef | grep ambari-server
root 1307 0 13 07:25 ? 00:06:46 /usr/lib/jvm/java/bin/java -server -XX:NewRatio=3 -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit -XX:CMSInitiatingOccupancyFraction=60 -XX:+CMSClassUnloadingEnabled -Dsun.zip.disableMemoryMapping=true -Xms512m -Xmx2048m -XX:MaxPermSize=128m -Djava.security.auth.login.config=/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false -cp /etc/ambari-server/conf:/usr/lib/ambari-server/*:/usr/share/java/postgresql-jdbc.jar org.apache.ambari.server.controller.AmbariServer
root 10677 10483 0 08:16 pts/1 00:00:00 grep ambari-server
I have tried manually deleting the folders, but no luck:
[root@sandbox stack-recommendations]# ls -ltr
total 24
drwxr-xr-x 1 root root 4096 Oct 20 03:54 2
drwxr-xr-x 1 root root 4096 Oct 25 03:54 3
drwxr-xr-x 1 root root 4096 Oct 26 06:29 1
drwxr-xr-x 1 root root 4096 Oct 30 08:29 4
drwxr-xr-x 1 root root 4096 Oct 30 08:29 5
drwxr-xr-x 1 root root 4096 Oct 30 08:29 6
[root@sandbox stack-recommendations]# pwd
/var/run/ambari-server/stack-recommendations
[root@sandbox stack-recommendations]# rmdir *
rmdir: failed to remove `1': Invalid argument
rmdir: failed to remove `2': Invalid argument
rmdir: failed to remove `3': Invalid argument
rmdir: failed to remove `4': Invalid argument
rmdir: failed to remove `5': Invalid argument
rmdir: failed to remove `6': Invalid argument
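`rmdir`/`mv` failing with "Invalid argument" on an otherwise ordinary directory points at the filesystem layer, not permissions; on the Docker-based sandbox images, /var/run paths can sit on an overlay/aufs mount where some unlink operations fail from inside the container. A diagnostic sketch (the container layout is an assumption about this sandbox build):

```shell
# See which filesystem the directory actually lives on:
stat -f /var/run/ambari-server/stack-recommendations
mount | grep -E 'overlay|aufs|/var/run'
# Check for immutable/append-only attributes, another source of EINVAL-like failures:
lsattr -d /var/run/ambari-server/stack-recommendations/*
```

If an overlay mount is the culprit, removing the directories from the Docker host (outside the container) or restarting the container so the upper layer is rebuilt are the usual workarounds.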
10-18-2017
03:18 AM
@Jay SenSharma Thanks for helping out. Restarting the shellinabox service in the sandbox helped me access the web SSH client.
10-18-2017
03:02 AM
@Jay SenSharma The command "ssh root@127.0.0.1 -p 2222" worked for me to use hadoop commands. However, I am still not able to access the web SSH client; please find the details below, taken after I logged into the Docker container.
[root@sandbox ~]# service shellinaboxd status
shellinaboxd (pid 2717) is running...
[root@sandbox ~]# netstat -tulpn | grep 4200
[root@sandbox ~]# sandbox-version
Sandbox information:
Created on: 25_10_2016_08_11_26 for
Hadoop stack version: Hadoop 2.7.3.2.5.0.0-1245
Ambari Version: 2.4.0.0-1225
Ambari Hash: 59175b7aa1ddb74b85551c632e3ce42fed8f0c85
Ambari build: Release : 1225
Java version: 1.8.0_111
OS Version: CentOS release 6.8 (Final)
I can see that port 4200 is not listening even though the shellinabox service is running fine. Please help me with this.
10-17-2017
04:04 AM
Dear experts, I am running the HDP 2.5 sandbox on the Azure cloud. I am facing the below problems; I request your help.
1) I am not able to ssh to the sandbox public IP as the root user with password hadoop. I am receiving access-denied errors:
Using username "root".
Using keyboard-interactive authentication.
Password:
Access denied
Using keyboard-interactive authentication.
Password:
2) I am not able to find the hdfs user and hdfs commands in the sandbox. I used the login I created during the Azure VM setup:
login as: chirunimmala
Using keyboard-interactive authentication.
Password:
Last login: Tue Oct 17 03:10:01 2017 from 103.251.48.2
[chirunimmala@sandbox ~]$ sudo su - hdfs
[sudo] password for chirunimmala:
su: user hdfs does not exist
[chirunimmala@sandbox ~]$ hdfs dfs -ls /
-bash: hdfs: command not found
[chirunimmala@sandbox ~]$
3) I am not able to use the built-in web SSH client. I have opened port 4200 in my NSG in Azure, but no luck; I am receiving a "site cannot be reached" error.
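All three symptoms are consistent with the HDP 2.5 sandbox running inside a Docker container on the Azure VM: the Azure-level account owns port 22, while the container's sshd is mapped to a higher port and the hdfs user exists only inside the container. A sketch (port 2222 and the default root password are the commonly documented sandbox settings; verify against your image's docs):

```shell
# From the Azure VM shell (or against the public IP with the port open in the NSG):
ssh root@127.0.0.1 -p 2222   # default password "hadoop"; a change is forced on first login
# Once inside the container, the hdfs user and commands are available:
sudo su - hdfs
hdfs dfs -ls /
```

For the web SSH client (point 3), the shellinabox service on port 4200 also lives inside the container, so it must be running there in addition to the NSG rule.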
10-17-2017
03:48 AM
@Jay SenSharma I am facing a similar problem: I am running HDP 2.5 on Azure and have opened port 4200 in the NSG, but I am still not able to use the built-in web SSH client. I tried to find the service "shellinaboxd" but could not. Can you please help?
[root@sandbox ~]# service shellinaboxd status
Redirecting to /bin/systemctl status shellinaboxd.service
● shellinaboxd.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
[root@sandbox ~]# service shellinaboxd start
Redirecting to /bin/systemctl start shellinaboxd.service
Failed to start shellinaboxd.service: Unit shellinaboxd.service failed to load: No such file or directory.
08-11-2017
11:29 AM
2 Kudos
I was able to find the solution. I was running Ranger admin on host1 and HiveServer on host2, and I had created the admin OS account on host1 but not on host2 (where the Hive server is running). Creating the admin account and its group (hdpmasters) on host2 resolved the issue. I guess Ambari views might need the OS account/group to be present on the server where the service (being accessed by the view) is installed.
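The fix described above can be sketched as the following commands on host2; the uid/gid values come from the `id admin` output in the 08-11-2017 posts below, and re-using them keeps ownership consistent across hosts:

```shell
# On host2 (the HiveServer host), mirror the account/group that exist on host1:
groupadd -g 1001 hdpmasters
groupadd -g 1012 admin
useradd -u 1012 -g admin -G hdpmasters admin
```

This matters because HiveServer2 resolves a user's groups on its own host, so a group-based Ranger policy only matches if the group membership is visible there.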
08-11-2017
11:11 AM
@Jay SenSharma Will the below findings help? If I add the admin user instead of its group hdpmasters to the Ranger Hive policy, neither of the errors is shown anymore in the Hive view. The problem occurs only if I use the group 'hdpmasters' in the policy. Please find below:
[ec2-user@XXXXXXX ~]$ hdfs dfs -ls /user
Found 9 items
drwxr-xr-x - admin hdpmasters 0 2017-08-11 07:02 /user/admin
[ec2-user@XXXXXX ~]$ id admin
uid=1012(admin) gid=1012(admin) groups=1012(admin),1001(hdpmasters)
Why does it work only if I grant 'admin' directly but not the group?
08-11-2017
10:50 AM
@Jay SenSharma I have already tried setting it to '*', but it is still not working.
08-11-2017
10:46 AM
Adding to my question: I added the admin OS account to a group 'hdpmasters' and used this group while configuring the Hive Ranger policy. If I use 'admin' instead of the group in the policy configuration, strangely the error "FAILED: HiveAccessControlException Permission denied: user [admin] does not have [USE] privilege on [null]" no longer appears (I can view the default database in the Ambari view). Can I know why this is happening?
08-11-2017
10:09 AM
Dear experts, I am running HDP 2.4 on the EC2 cloud. Recently I installed Ranger and integrated Hive. When I try to use the Ambari Hive view with the admin account, I receive the two errors below. Could you please help? I am attaching a few screenshots showing the required configurations: admin-hdfs-policy.png admin-hive-policy.png proxy.png
The admin OS account is working fine:
----------------------------------------------
[ec2-user@XXXXXXXXX ~]$ id admin
uid=1012(admin) gid=1012(admin) groups=1012(admin),1001(hdpmasters)
[ec2-user@XXXXXXXX ~]$ hdfs dfs -ls /user
Found 9 items
drwxr-xr-x - admin hdfs 0 2017-08-11 04:35 /user/admin
[admin@XXXXXXXX ~]$ hdfs dfs -ls /user/admin
Found 2 items
drwxr-xr-x - admin hdfs 0 2017-08-11 05:33 /user/admin/.hiveJars
drwxr-xr-x - admin hdfs 0 2017-08-11 04:35 /user/admin/testing
[admin@XXXXXX ~]$ hive
WARNING: Use "yarn jar" to launch YARN applications.
Logging initialized using configuration in file:/etc/hive/2.4.3.0-227/0/hive-log4j.properties
hive> show databases;
OK
default
Time taken: 1.044 seconds, Fetched: 1 row(s)
Errors:
-----------------------------
Failed to execute statement: show databases like '*' org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [admin] does not have [USE] privilege on [null]
E090 HDFS020 Could not write file /user/admin/hive/jobs/hive-job-5-2017-08-11_05-43/query.hql [HdfsApiException] org.apache.ambari.view.utils.hdfs.HdfsApiException: HDFS020 Could not write file /user/admin/hive/jobs/hive-job-5-2017-08-11_05-43/query.hql
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): Unauthorized connection for super-user: root from IP XXXXXXXXX
Labels:
- Apache Ambari
- Apache Hive
- Apache Ranger
08-06-2017
11:24 AM
Thanks a lot, increasing the stack size for the NFS gateway as suggested helped. Thanks again, you have resolved all my issues today 🙂
08-06-2017
06:30 AM
I have tried changing the ulimit as suggested and restarted the gateway, but still no luck. I don't see any .log file, but I was able to get a few details as below.
/var/log/hadoop/root/nfs3_jsvc.out
-------------------------
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGBUS (0x7) at pc=0x00007f7b0a23bb7c, pid=19469, tid=140166720608064
#
# JRE version: (8.0_77-b03) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.77-b03 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# j java.lang.Object.<clinit>()V+0
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /tmp/hs_err_pid19469.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
hadoop-hdfs-nfs3-XXXXXXX.out
-------------------------------------------------------
ulimit -a for privileged nfs user hdfs
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63392
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
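The follow-up on 08-06-2017 11:24 AM confirms the fix was increasing the stack size: the 8192 KB `stack size` in the ulimit output above is too small for the privileged nfs3 jsvc process, which matches the SIGBUS during class initialization. A sketch of the change; whether it belongs in hadoop-env.sh or in the privileged user's limits depends on the HDP version, and the variable name below is an assumption:

```shell
# Option 1 (hadoop-env.sh via Ambari > HDFS > Configs -- variable name assumed):
export HADOOP_NFS3_OPTS="-Xss2m ${HADOOP_NFS3_OPTS}"
# Option 2: raise the process stack limit before the gateway starts:
ulimit -s 65536
```

Restart the NFSGATEWAY component after either change so the jsvc process picks it up.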
08-06-2017
05:58 AM
Dear experts, I am running HDP 2.4.3 with Ambari 2.4 on AWS EC2 instances running Red Hat Enterprise Linux Server release 7.3 (Maipo). Whenever I start the NFSGATEWAY service on a host, it automatically stops after some time. Could you please assist me with this? Even if I kill the existing nfs3 process and restart the service, the issue persists. Please find a few details below.
ps -ef | grep nfs3
----------------------------------------------------------
root 9766 1 0 01:42 pts/0 00:00:00 jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp /usr/hdp/current/hadoop-client/conf:/usr/hdp/2.4.3.0-227/hadoop/lib/*:/usr/hdp/2.4.3.0-227/hadoop/.//*:/usr/hdp/2.4.3.0-227/hadoop-hdfs/./:/usr/hdp/2.4.3.0-227/hadoop-hdfs/lib/*:/usr/hdp/2.4.3.0-227/hadoop-hdfs/.//*:/usr/hdp/2.4.3.0-227/hadoop-yarn/lib/*:/usr/hdp/2.4.3.0-227/hadoop-yarn/.//*:/usr/hdp/2.4.3.0-227/hadoop-mapreduce/lib/*:/usr/hdp/2.4.3.0-227/hadoop-mapreduce/.//*::/usr/hdp/2.4.3.0-227/tez/*:/usr/hdp/2.4.3.0-227/tez/lib/*:/usr/hdp/2.4.3.0-227/tez/conf:/usr/hdp/2.4.3.0-227/tez/*:/usr/hdp/2.4.3.0-227/tez/lib/*:/usr/hdp/2.4.3.0-227/tez/conf -Xmx1024m -Dhdp.version=2.4.3.0-227 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.4.3.0-227/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.4.3.0-227/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.3.0-227/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.4.3.0-227 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-ip-10-0-0-223.ap-south-1.compute.internal.log -Dhadoop.home.dir=/usr/hdp/2.4.3.0-227/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.4.3.0-227/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.3.0-227/hadoop/lib/native:/usr/hdp/2.4.3.0-227/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.3.0-227/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter
systemctl status rpcbind
--------------------------------------------------
● rpcbind.service - RPC bind service
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; indirect; vendor preset: enabled)
Active: active (running) since Sun 2017-08-06 01:29:31 EDT; 18min ago
Main PID: 6164 (rpcbind)
CGroup: /system.slice/rpcbind.service
└─6164 /sbin/rpcbind -w
Tags:
- nfsgateway
08-06-2017
05:21 AM
Thank you, disabling certificate verification as described in https://access.redhat.com/articles/2039753#controlling-certificate-verification-7 helped.
08-05-2017
05:59 AM
Dear experts, I have installed HDP 2.4.3 on AWS EC2 instances, and I am facing a problem: the ambari-agent and ambari-server are both running fine, but the agent cannot connect to the server. I have tried all the possibilities suggested in the HDP forums, but nothing worked. Could you please help me with this? Below are some details.
Versions
------------------
Red Hat Enterprise Linux Server release 7.3 (Maipo) on AWS EC2
Ambari 2.4.3.0
HDP 2.4.3
Python 2.7.5 (default, May 3 2017, 07:55:04)
ambari-agent service running
---------------------------
root 3313 1 0 01:22 pts/0 00:00:00 /usr/bin/python /usr/lib/python2.6/site-packages/ambari_agent/AmbariAgent.py start
root 3321 3313 0 01:22 pts/0 00:00:00 /usr/bin/python /usr/lib/python2.6/site-packages/ambari_agent/main.py start
ambari-env.sh file
-----------------------
AMBARI_PASSPHRASE="DEV"
export PATH=$PATH:/var/lib/ambari-agent
export PYTHONPATH=$PYTHONPATH:/usr/lib/python2.6/site-packages
ambari-agent.ini
------------------------
[security]
keysdir=/var/lib/ambari-agent/keys
server_crt=ca.crt
passphrase_env_var_name=AMBARI_PASSPHRASE
ssl_verify_cert=0
JDK versions in ambari-server.properties
----------------------------------
java.home=/usr/jdk64/jdk1.8.0_77
java.releases=jdk1.8,jdk1.7
jdk1.7.desc=Oracle JDK 1.7 + Java Cryptography Extension (JCE) Policy Files 7
jdk1.8.desc=Oracle JDK 1.8 + Java Cryptography Extension (JCE) Policy Files 8
ambari-agent log
--------------------------------
INFO 2017-08-05 01:14:38,849 HeartbeatHandlers.py:115 - Stop event received
INFO 2017-08-05 01:14:38,849 NetUtil.py:125 - Stop event received
INFO 2017-08-05 01:14:38,849 ExitHelper.py:53 - Performing cleanup before exiting...
INFO 2017-08-05 01:14:38,850 ExitHelper.py:67 - Cleanup finished, exiting with code:0
INFO 2017-08-05 01:14:39,504 main.py:223 - Agent died gracefully, exiting.
INFO 2017-08-05 01:14:39,505 ExitHelper.py:53 - Performing cleanup before exiting...
INFO 2017-08-05 01:18:09,819 main.py:90 - loglevel=logging.INFO
INFO 2017-08-05 01:18:09,819 main.py:90 - loglevel=logging.INFO
INFO 2017-08-05 01:18:09,819 main.py:90 - loglevel=logging.INFO
INFO 2017-08-05 01:18:09,820 DataCleaner.py:39 - Data cleanup thread started
INFO 2017-08-05 01:18:09,822 DataCleaner.py:120 - Data cleanup started
INFO 2017-08-05 01:18:09,826 DataCleaner.py:122 - Data cleanup finished
INFO 2017-08-05 01:18:09,853 PingPortListener.py:50 - Ping port listener started on port: 8670
INFO 2017-08-05 01:18:09,856 main.py:349 - Connecting to Ambari server at https://XXXXXXX:8440 (XXXXXX)
INFO 2017-08-05 01:18:09,856 NetUtil.py:65 - Connecting to https://XXXXXXXXXXXXXXXXXXXXXXXX:8440/ca
ERROR 2017-08-05 01:18:09,918 NetUtil.py:91 - [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)
ERROR 2017-08-05 01:18:09,919 NetUtil.py:92 - SSLError: Failed to connect. Please check openssl library versions.
Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2017-08-05 01:18:09,921 NetUtil.py:119 - Server at https://XXXXXXXXXXXXXXX:8440 is not reachable, sleeping for 10 seconds...
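The accepted fix (see the 08-06-2017 05:21 AM reply above) was disabling Python's certificate verification per the linked Red Hat article: on RHEL 7 with a recent Python 2.7.x, HTTPS certificates are validated by default, which breaks the agent's handshake with Ambari's self-signed certificate on port 8440. A sketch of the relevant knob, as described in that article:

```ini
; /etc/python/cert-verification.cfg on the agent host -- relaxes Python's
; default certificate validation so the agent can reach https://<server>:8440/ca
[https]
verify=disable
```

Restart ambari-agent afterwards; note this is a workaround, and re-enabling verification with a properly trusted certificate is the safer long-term setup.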
Labels:
- Apache Ambari