Member since: 12-04-2018
Posts: 10
Kudos Received: 5
Solutions: 2
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 5273 | 05-28-2018 04:34 PM |
 | 4028 | 11-03-2017 07:37 PM |
05-28-2018
04:34 PM
1 Kudo
I solved the issue in two steps:

1) Adjusting the Superset configuration:

# Set the authentication method id for LDAP
AUTH_TYPE=2
# Set user login to query LDAP server
AUTH_LDAP_BIND_USER='CN=user_ldap,OU=Hadoop,OU=Service Accounts,DC=domain,DC=com,DC=ar'
AUTH_LDAP_BIND_PASSWORD='12345678Abc'
# Set basic specifications to get information from LDAP server
AUTH_LDAP_SEARCH='DC=domain,DC=com,DC=ar'
AUTH_LDAP_SERVER='ldap://server.domain.com.ar'
AUTH_LDAP_UID_FIELD='sAMAccountName'
AUTH_LDAP_USE_TLS=False
AUTH_LDAP_USERNAME_FORMAT='%s'
# I had to comment out this option so that the login works using only the UID
###AUTH_LDAP_APPEND_DOMAIN='domain.com.ar'
# Enable automatic user registration; then you only need to manually change the assigned role
AUTH_ROLE_PUBLIC='Public'
AUTH_USER_REGISTRATION=True
AUTH_USER_REGISTRATION_ROLE='Public'
2) Adjusting the SQLAlchemy URI to get access to my kerberized cluster:

hive://hive-server.domain.com.ar:10000/hive-database?auth=KERBEROS&kerberos_service_name=hive

*** I had to obtain a Kerberos ticket manually to make it work:

kinit -k -t /etc/security/keytabs/superset.headless.keytab superset-hadoop_desarrollo
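Before pointing Superset at the cluster, it can help to confirm the ticket and HiveServer2 settings from plain Python first. Below is a minimal sketch using PyHive (the driver behind hive:// URIs); it assumes pyhive with SASL/Kerberos support is installed and reuses the host, database and service name from the URI above.

# test_hive_kerberos.py - quick connectivity check (sketch; assumes pyhive with
# SASL support is installed and a valid ticket exists from the kinit command above)
from pyhive import hive

conn = hive.connect(
    host='hive-server.domain.com.ar',   # HiveServer2 host from the URI above
    port=10000,
    database='hive-database',
    auth='KERBEROS',
    kerberos_service_name='hive',       # must match the Hive service principal
)
cursor = conn.cursor()
cursor.execute('SHOW TABLES')
print(cursor.fetchall())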
05-11-2018
10:12 PM
We recently installed Superset on our kerberized HDP cluster (2.6.3). We had no problem adding a connection to an Oracle database, but we could not make it work with Hive. Below are the different connection strings we tested and the errors returned:

1) hive://SERVER-FQDN:10000/HIVEDB;principal=hive/SERVER-FQDN@REALM
ERROR: {"error": "Connection failed!\n\nThe error message returned was:\nPassword should be set if and only if in LDAP or CUSTOM mode; Remove password or use one of those modes"}

2) hive://SERVER-FQDN:10000/HIVEDB
ERROR: {"error": "Connection failed!\n\nThe error message returned was:\nBad status: 3 (b'Unsupported mechanism type PLAIN')"}

Does anybody know what we are doing wrong? Is it necessary to configure LDAP access for Superset? How? Another issue that is burning our heads: we have never been able to set the log level to DEBUG; it only shows INFO messages. We would be very grateful if someone could help us solve this, because it feels like we are working blindly.
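On the DEBUG-logging side question: Superset normally reads its log settings from superset_config.py. A minimal sketch follows, assuming the build shipped with HDP honors the standard LOG_LEVEL / ENABLE_TIME_ROTATE keys from Superset's default config; the log file path is a placeholder.

# superset_config.py - logging sketch (assumes these standard Superset config keys
# are honored by the HDP build; the file path below is a placeholder)
LOG_LEVEL = "DEBUG"                          # console log level
ENABLE_TIME_ROTATE = True                    # also write a time-rotating log file
TIME_ROTATE_LOG_LEVEL = "DEBUG"
FILENAME = "/var/log/superset/superset.log"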
Labels:
- Apache Hive
12-12-2017
09:54 PM
@Nikhil Silsarma I'm working with Kafka 0.10.1 (HDP 2.6.3). I don't receive any errors from the console consumer or the NiFi processor, not even when using the "--new-consumer" option. Tell me if you need more details.
12-07-2017
03:56 PM
Hi Arun, did you solve the issue? I'm having the same problem when trying to use the NiFi processors ConsumeKafka and ConsumeKafka_0_10 to read messages from Kafka 0.10.1 (HDP 2.6 without Kerberos). I ran tests with different parameter settings, and the processors start without errors but never receive any messages. I had the same results running the kafka-console-consumer shell script with the bootstrap-server parameter, even after commenting out the Kerberos-related settings. Any suggestions?
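One way to rule out NiFi and the console script entirely is to run a small standalone consumer against the brokers. Below is a minimal sketch using the kafka-python client; the topic, group id and broker address are placeholders (HDP brokers typically listen on port 6667), and it assumes a plaintext, non-Kerberos listener.

# check_consumer.py - standalone consumer to verify the brokers actually deliver messages
# (sketch; assumes "pip install kafka-python" and a plaintext listener)
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    'test-topic',                                  # placeholder topic name
    bootstrap_servers='broker1.domain.com:6667',   # placeholder broker; HDP default port is 6667
    group_id='debug-consumer',
    auto_offset_reset='earliest',                  # start from the beginning of the topic
    consumer_timeout_ms=30000,                     # give up after 30 s without messages
)
for msg in consumer:
    print(msg.topic, msg.partition, msg.offset, msg.value)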
12-07-2017
03:47 PM
Hi Eric, did you solve the issue? I'm having the same problem when trying to use the processors ConsumeKafka and ConsumeKafka_0_10 to read messages from Kafka 0.10.1 (HDP 2.6 without Kerberos). I ran tests with different parameter settings, and the processors start without errors but never receive any messages. I had the same results running the kafka-console-consumer shell script with the bootstrap-server parameter. Can you help me? Any idea of what's going on?
11-03-2017
07:37 PM
3 Kudos
Marcelo,
I just resolved the same issue. Checking the error logs (atlas.yyyymmdd-HHMISS.err), I found this:
Exception in thread "main" org.apache.atlas.exception.AtlasBaseException: EmbeddedServer.Start: failed!
at org.apache.atlas.web.service.EmbeddedServer.start(EmbeddedServer.java:95)
at org.apache.atlas.Atlas.main(Atlas.java:118)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
at org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.eclipse.jetty.server.Server.doStart(Server.java:293)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.apache.atlas.web.service.EmbeddedServer.start(EmbeddedServer.java:92)
... 1 more
log4j:WARN No appenders could be found for logger (org.eclipse.jetty.servlet.listener.ELContextCleaner).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
When I saw the message "Caused by: java.net.BindException: Address already in use", I realized that I had more than one instance of the Metadata Server running on the host:
# ps -fea|grep atlas
atlas 11443 1 0 Oct31 ? 00:29:58 /usr/jdk64/jdk1.8.0_77/bin/java -Datlas.log.dir=/usr/hdp/2.6.3.0-235/atlas/logs -Datlas.log.file=application.log -Datlas.home=/usr/hdp/2.6.3.0-235/atlas -Datlas.conf=/usr/hdp/2.6.3.0-235/atlas/conf -Xms2048m -Xmx2048m -XX:MaxNewSize=600m -XX:MetaspaceSize=100m -XX:MaxMetaspaceSize=512m -server -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+PrintTenuringDistribution -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/atlas_server.hprof -Xloggc:/gc-worker.log -verbose:gc -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=1m -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCTimeStamps -Dlog4j.configuration=atlas-log4j.xml -Djava.net.preferIPv4Stack=true -server -classpath /usr/hdp/2.6.3.0-235/atlas/conf:/usr/hdp/2.6.3.0-235/atlas/server/webapp/atlas/WEB-INF/classes:/usr/hdp/2.6.3.0-235/atlas/server/webapp/atlas/WEB-INF/lib/*:/usr/hdp/2.6.3.0-235/atlas/libext/*:/etc/hbase/conf org.apache.atlas.Atlas -app /usr/hdp/2.6.3.0-235/atlas/server/webapp/atlas
atlas 22076 1 0 Nov01 ? 00:24:03 /usr/jdk64/jdk1.8.0_77/bin/java -Datlas.log.dir=/var/log/atlas -Datlas.log.file=application.log -Datlas.home=/usr/hdp/2.6.3.0-235/atlas -Datlas.conf=/usr/hdp/current/atlas-server/conf -Xms2048m -Xmx2048m -XX:MaxNewSize=600m -XX:MetaspaceSize=100m -XX:MaxMetaspaceSize=512m -server -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+PrintTenuringDistribution -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/atlas/atlas_server.hprof -Xloggc:/var/log/atlas/gc-worker.log -verbose:gc -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=1m -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCTimeStamps -Dlog4j.configuration=atlas-log4j.xml -classpath /usr/hdp/current/atlas-server/conf:/usr/hdp/current/atlas-server/server/webapp/atlas/WEB-INF/classes:/usr/hdp/current/atlas-server/server/webapp/atlas/WEB-INF/lib/*:/usr/hdp/2.6.3.0-235/atlas/libext/*:/etc/hbase/conf org.apache.atlas.Atlas -app /usr/hdp/current/atlas-server/server/webapp/atlas
atlas 23317 1 15 16:01 ? 00:01:05 /usr/jdk64/jdk1.8.0_77/bin/java -Datlas.log.dir=/var/log/atlas -Datlas.log.file=application.log -Datlas.home=/usr/hdp/2.6.3.0-235/atlas -Datlas.conf=/usr/hdp/current/atlas-server/conf -Xms2048m -Xmx2048m -XX:MaxNewSize=600m -XX:MetaspaceSize=100m -XX:MaxMetaspaceSize=512m -server -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+PrintTenuringDistribution -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/atlas/atlas_server.hprof -Xloggc:/var/log/atlas/gc-worker.log -verbose:gc -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=1m -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCTimeStamps -Dlog4j.configuration=atlas-log4j.xml -classpath /usr/hdp/current/atlas-server/conf:/usr/hdp/current/atlas-server/server/webapp/atlas/WEB-INF/classes:/usr/hdp/current/atlas-server/server/webapp/atlas/WEB-INF/lib/*:/usr/hdp/2.6.3.0-235/atlas/libext/*:/etc/hbase/conf org.apache.atlas.Atlas -app /usr/hdp/current/atlas-server/server/webapp/atlas
Then I stopped the Atlas services from Ambari and killed all remaining related processes on the host.
Now the Atlas Web UI works without problems. Let me know if you get the same results.
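If you want to confirm up front whether something is still holding the Atlas port before restarting the service, a quick check is sketched below. It assumes the default web port 21000 (atlas.server.http.port); adjust it if your cluster uses a different value.

# check_atlas_port.py - report whether the Atlas web port is already bound
# (sketch; 21000 is the default atlas.server.http.port, change it if customized)
import socket

ATLAS_PORT = 21000
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    sock.bind(('0.0.0.0', ATLAS_PORT))
    print('Port %d is free, Atlas should be able to start.' % ATLAS_PORT)
except OSError as err:
    print('Port %d is already in use (%s), look for a leftover Atlas process.' % (ATLAS_PORT, err))
finally:
    sock.close()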
09-05-2017
10:03 PM
I had the same issue with Sqoop. I solved it by pointing my "target-dir" to an HDFS path where my user has read/write privileges.
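A quick way to confirm that a candidate target-dir parent is actually writable before launching the Sqoop job is sketched below; the path is a placeholder and the script assumes the hdfs client is on the PATH.

# check_target_dir.py - verify write access to the intended --target-dir parent
# (sketch; the path is a hypothetical example)
import subprocess

path = '/user/myuser/sqoop'   # hypothetical target-dir parent
subprocess.run(['hdfs', 'dfs', '-mkdir', '-p', path], check=True)
subprocess.run(['hdfs', 'dfs', '-touchz', path + '/_write_check'], check=True)
print('Write access to %s confirmed.' % path)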
10-21-2016
02:39 PM
1 Kudo
Does anybody know if it's possible to install the Ambari Infra service using an existing Solr instance? What would be the procedure to do it?
07-21-2016
07:46 PM
Did you solve the issue? We have the same errors in our 2.4.2 cluster:
ERROR [alert-event-bus-2] AlertReceivedListener:341 - Unable to process alert ams_metrics_monitor_process for an invalid cluster named XXXX.
WARN [qtp-ambari-client-68] ViewRegistry:855 - Could not find the cluster identified by XXXX.