Member since: 05-11-2016
Posts: 35
Kudos Received: 4
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1531 | 12-06-2016 04:19 PM |
11-16-2022 01:24 AM
How would you check logs related to LDAP? In my setup, all of the Docker containers (superset_app, superset-worker, and so on) show no errors, but I cannot log in as either a normal user or an LDAP user. Here is what I have configured:
from flask_appbuilder.security.manager import AUTH_LDAP
AUTH_TYPE = AUTH_LDAP
AUTH_USER_REGISTRATION = True
AUTH_LDAP_SERVER = "ldap://localhost:389"
# AUTH_LDAP_SEARCH="ou=people,dc=superset,dc=com"
AUTH_LDAP_SEARCH= "cn=admin,dc=ramhlocal,dc=com"
# AUTH_LDAP_APPEND_DOMAIN = "XXX.com"
AUTH_LDAP_UID_FIELD="cn"
AUTH_LDAP_FIRSTNAME_FIELD= "Rohit"
AUTH_LDAP_LASTTNAME_FIELD= "sn"
AUTH_LDAP_USE_TLS = False
# AUTH_LDAP_UID_FIELD=sAMAccountName
# AUTH_LDAP_BIND_USER=CN=Bind,OU=Admin,dc=our,dc=domain
AUTH_LDAP_ALLOW_SELF_SIGNED= True
AUTH_LDAP_APPEND_DOMAIN= False
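One way to surface LDAP-specific messages is to turn up the relevant loggers from superset_config.py and then tail the container logs. This is a minimal sketch, not an official recipe: the logger name follows Flask-AppBuilder's module path, and OPT_DEBUG_LEVEL=255 simply means "maximum verbosity" for python-ldap.

```python
# superset_config.py -- sketch for making LDAP authentication chatty
import logging
import ldap  # python-ldap, already required when AUTH_TYPE = AUTH_LDAP

LOG_LEVEL = "DEBUG"  # Superset's own log level

# Flask-AppBuilder's security manager logs bind/search attempts under this module path
logging.getLogger("flask_appbuilder.security").setLevel(logging.DEBUG)

# Ask the LDAP client library to trace its conversation with the server (255 = most verbose)
ldap.set_option(ldap.OPT_DEBUG_LEVEL, 255)
```

With that in place, `docker logs -f superset_app` (and the worker container) should show the bind DN and search filter Superset actually sends, which usually points at the misconfigured setting.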
06-11-2020 01:27 PM
Our installation had the password hash in another table:
update ambari.user_authentication set authentication_key='538916f8943ec225d97a9a86a2c6ec0818c1cd400e09e03b660fdaaec4af29ddbb6f2b1033b81b00' where user_id='1';
Note: user_id=1 was the admin in my case.
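If you prefer to script that reset rather than run it from the SQL shell, here is a minimal sketch assuming Ambari is backed by PostgreSQL; the host name and credentials are placeholders, and the hash is the same value as in the UPDATE above.

```python
# Hypothetical helper: apply the same UPDATE through psycopg2 (assumes a PostgreSQL-backed Ambari)
import psycopg2

NEW_HASH = "538916f8943ec225d97a9a86a2c6ec0818c1cd400e09e03b660fdaaec4af29ddbb6f2b1033b81b00"

conn = psycopg2.connect(host="ambari-db-host", dbname="ambari",
                        user="ambari", password="changeme")  # placeholder credentials
with conn, conn.cursor() as cur:  # commits on success, rolls back on error
    cur.execute(
        "UPDATE ambari.user_authentication SET authentication_key = %s WHERE user_id = %s",
        (NEW_HASH, 1),
    )
conn.close()
```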
04-19-2019 01:28 AM
@Jeff Watson Could you try using the GetHDFSFileInfo processor? It accepts incoming connections and lets you supply regular expressions to match only the required directories/files and to exclude unwanted files.
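For what it's worth, those filters are ordinary regular expressions; the snippet below is a purely illustrative sketch of the kind of include/exclude patterns you might plug in (the file names and patterns are made up).

```python
import re

# Keep only CSV files, drop temporary/marker files -- illustrative patterns only
file_filter = re.compile(r".*\.csv$")
exclude = re.compile(r"^_.*|.*\.tmp$")

for name in ["data1.csv", "_SUCCESS", "part-0000.tmp", "report.csv"]:
    if file_filter.match(name) and not exclude.match(name):
        print("keep:", name)   # keeps data1.csv and report.csv
```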
07-04-2017 12:59 AM
I forgot that I ripped the --jars part out of the spark-submit above because the text was too long; here is that part. Other complaints on StackOverflow about missing database drivers point to the driver not being included in the class path. As you can see in --conf spark.driver.extraClassPath, --conf spark.executor.extraClassPath, and --jars, I tried to provide the /usr/hdp/current/phoenix-client/phoenix-client.jar driver in all contexts.
spark-submit --jars /home/jwatson/sdr/bin/e2parser-1.0.jar,/home/jwatson/sdr/bin/f18parser-1.0.jar,/home/jwatson/sdr/bin/mdanparser-1.0.jar,/home/jwatson/sdr/bin/regimerecog-1.0.jar,/home/jwatson/sdr/bin/tsvparser-1.0.jar,/home/jwatson/sdr/bin/xmlparser-1.0.jar,/home/jwatson/sdr/bin/aws-java-sdk-1.11.40.jar,/home/jwatson/sdr/bin/aws-java-sdk-s3-1.11.40.jar,/home/jwatson/sdr/bin/jackson-annotations-2.6.5.jar,/home/jwatson/sdr/bin/jackson-core-2.6.5.jar,/home/jwatson/sdr/bin/jackson-databind-2.6.5.jar,/home/jwatson/sdr/bin/jackson-module-paranamer-2.6.5.jar,/home/jwatson/sdr/bin/jackson-module-scala_2.10-2.6.5.jar,/home/jwatson/sdr/bin/miglayout-swing-4.2.jar,/home/jwatson/sdr/bin/commons-configuration-1.6.jar,/home/jwatson/sdr/bin/xml-security-impl-1.0.jar,/home/jwatson/sdr/bin/metrics-core-2.2.0.jar,/home/jwatson/sdr/bin/jcommon-1.0.0.jar,/home/jwatson/sdr/bin/ojdbc6.jar,/home/jwatson/sdr/bin/jopt-simple-4.5.jar,/home/jwatson/sdr/bin/ucanaccess-3.0.1.jar,/home/jwatson/sdr/bin/httpcore-nio-4.4.5.jar,/home/jwatson/sdr/bin/nifi-site-to-site-client-1.0.0.jar,/home/jwatson/sdr/bin/nifi-spark-receiver-1.0.0.jar,/home/jwatson/sdr/bin/commons-compiler-2.7.8.jar,/home/jwatson/sdr/bin/janino-2.7.8.jar,/home/jwatson/sdr/bin/hsqldb-2.3.1.jar,/home/jwatson/sdr/bin/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar,/home/jwatson/sdr/bin/slf4j-api-1.7.21.jar,/home/jwatson/sdr/bin/slf4j-log4j12-1.7.21.jar,/home/jwatson/sdr/bin/slf4j-simple-1.7.21.jar,/home/jwatson/sdr/bin/snappy-java-1.1.1.7.jar,/home/jwatson/sdr/bin/snakeyaml-1.7.jar,local://usr/hdp/current/hadoop-client/client/hadoop-common.jar,local://usr/hdp/current/hadoop-client/client/hadoop-mapreduce-client-core.jar,local://usr/hdp/current/hadoop-client/client/jetty-util.jar,local://usr/hdp/current/hadoop-client/client/netty-all-4.0.23.Final.jar,local://usr/hdp/current/hadoop-client/client/paranamer-2.3.jar,local://usr/hdp/current/hadoop-client/lib/commons-cli-1.2.jar,local://usr/hdp/current/hadoop-client/lib/httpclient-4.5.2.jar,local://usr/hdp/current/hadoop-client/lib/jetty-6.1.26.hwx.jar,local://usr/hdp/current/hadoop-client/lib/joda-time-2.8.1.jar,local://usr/hdp/current/hadoop-client/lib/log4j-1.2.17.jar,local://usr/hdp/current/hbase-client/lib/hbase-client.jar,local://usr/hdp/current/hbase-client/lib/hbase-common.jar,local://usr/hdp/current/hbase-client/lib/hbase-hadoop-compat.jar,local://usr/hdp/current/hbase-client/lib/hbase-protocol.jar,local://usr/hdp/current/hbase-client/lib/hbase-server.jar,local://usr/hdp/current/hbase-client/lib/protobuf-java-2.5.0.jar,local://usr/hdp/current/hive-client/lib/antlr-runtime-3.4.jar,local://usr/hdp/current/hive-client/lib/commons-collections-3.2.2.jar,local://usr/hdp/current/hive-client/lib/commons-dbcp-1.4.jar,local://usr/hdp/current/hive-client/lib/commons-pool-1.5.4.jar,local://usr/hdp/current/hive-client/lib/datanucleus-api-jdo-4.2.1.jar,local://usr/hdp/current/hive-client/lib/datanucleus-core-4.1.6.jar,local://usr/hdp/current/hive-client/lib/datanucleus-rdbms-4.1.7.jar,local://usr/hdp/current/hive-client/lib/geronimo-jta_1.1_spec-1.1.1.jar,local://usr/hdp/current/hive-client/lib/hive-exec.jar,local://usr/hdp/current/hive-client/lib/hive-jdbc.jar,local://usr/hdp/current/hive-client/lib/hive-metastore.jar,local://usr/hdp/current/hive-client/lib/jdo-api-3.0.1.jar,local://usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar,local://usr/hdp/current/phoenix-client/phoenix-client.jar,local://usr/hdp/current/spark-client/lib/spark-assembly-1.6.2.2.5.3.0-37-hadoop2.7.3.2.5.3.0-37.jar
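A quick way to confirm the Phoenix driver really landed on the driver's class path is to ask the JVM for the class from a tiny PySpark job. This is only a sketch: it relies on the internal `_jvm` py4j gateway, and org.apache.phoenix.jdbc.PhoenixDriver is the standard Phoenix JDBC driver class.

```python
# classpath_check.py -- submit with the same --jars / extraClassPath settings as the real job
from pyspark import SparkContext

sc = SparkContext(appName="phoenix-classpath-check")
try:
    # Raises if phoenix-client.jar is not visible to the driver JVM
    sc._jvm.java.lang.Class.forName("org.apache.phoenix.jdbc.PhoenixDriver")
    print("Phoenix JDBC driver IS on the driver class path")
except Exception as exc:
    print("Phoenix JDBC driver NOT found on the driver class path:", exc)
finally:
    sc.stop()
```

(Executors have their own class path, so a clean result here only rules out the driver side.)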
12-06-2016 04:19 PM
I finally tracked it down. The clue, which I didn't notice until I pasted the error message above, was that the error was a connection refused on the DistributedCacheClient. I had set the properties in GetHBase | DistributedCacheClient, but hadn't created the DistributedCacheServer in NiFi's global settings. Once I did that, I hit HBase errors (to make it more fun, HBase happened to be down while I was testing), but once I fixed that, everything started to work.
11-04-2017 12:19 PM
Hi @Jeff Watson. You are correct about SAS's use of String datatypes. Good catch! One of my customers also had to deal with this. String datatype conversions can perform very poorly in SAS. With SAS/ACCESS to Hadoop you can set the libname option DBMAX_TEXT (added with the SAS 9.4m1 release) to globally restrict the character length of all columns read into SAS. However, for restricting column size, SAS specifically recommends using the VARCHAR datatype in Hive whenever possible. http://support.sas.com/documentation/cdl/en/acreldb/67473/HTML/default/viewer.htm#n1aqglg4ftdj04n1eyvh2l3367ql.htm
Use Case
Large Table, All Columns of Type String: Table A stored in Hive has 40 columns, all of type String, with 500M rows. By default, SAS/ACCESS converts String to $32K, i.e. 32K in length per character column. The math for this size table yields a ~1.2 MB row length x 500M rows (see the quick calculation below), which brings the system to a halt: too large to store in LASR or WORK. The following techniques can be used to work around the challenge in SAS, and they all work:
- Use char and varchar in Hive instead of String.
- Set the libname option DBMAX_TEXT to globally restrict the character length of all columns read in.
- In Hive, do "SET TBLPROPERTIES SASFMT" to add SAS formats on the schema in Hive.
- Add formatting to SAS code during inbound reads (example: Sequence Length 8 Informat 10. Format 10.).
I hope this helps.
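To make the "too large" point concrete, here is a quick back-of-the-envelope calculation of what the default $32K conversion does to that 40-column, 500M-row table (just the arithmetic, nothing SAS-specific):

```python
# Rough size estimate when every String column is read as a ~32K character field
cols = 40
bytes_per_col = 32 * 1024        # ~32 KB per converted column
rows = 500_000_000

row_len = cols * bytes_per_col   # ~1.3 MB per row
total = row_len * rows           # ~650 TB before any compression
print(f"row length ~= {row_len / 1e6:.2f} MB, table ~= {total / 1e12:.0f} TB")
```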