Member since 02-23-2019
Posts: 29 | Kudos Received: 2 | Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 3312 | 07-08-2016 02:32 AM |
03-16-2019 07:37 PM
It should be OK because the MIT Kerberos client is running fine.
[cloudera@~]$ telnet quickstart.cloudera 88
Trying 10.10.10.190...
Connected to quickstart.cloudera.
Escape character is '^]'.
^]
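If telnet is not available, the same reachability check of the KDC port can be scripted; a minimal bash sketch (hostname and port taken from this thread, bash's /dev/tcp feature and coreutils timeout assumed available):

```shell
#!/usr/bin/env bash
# Return success if a TCP connection to host:port opens within a timeout.
# Uses bash's built-in /dev/tcp pseudo-device, so no telnet/nc is needed.
kdc_port_open() {
  local host=$1 port=$2 secs=${3:-3}
  timeout "$secs" bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
}

# Example usage against the KDC from this thread:
# kdc_port_open quickstart.cloudera 88 && echo "KDC reachable" || echo "KDC unreachable"
```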
03-16-2019 07:09 PM
See below.
cat "C:\ProgramData\MIT\Kerberos5\krb5.ini"
[libdefaults]
default_realm = CLOUDERA
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = rc4-hmac
default_tkt_enctypes = rc4-hmac
permitted_enctypes = rc4-hmac
udp_preference_limit = 1
kdc_timeout = 3000
[realms]
CLOUDERA = {
kdc = quickstart.cloudera
admin_server = quickstart.cloudera
}
cat "C:\Users\All Users\MIT\Kerberos5\krb5.ini"
[libdefaults]
default_realm = CLOUDERA
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = rc4-hmac
default_tkt_enctypes = rc4-hmac
permitted_enctypes = rc4-hmac
udp_preference_limit = 1
kdc_timeout = 3000
[realms]
CLOUDERA = {
kdc = quickstart.cloudera
admin_server = quickstart.cloudera
}
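For what it's worth, on Windows Vista and later C:\Users\All Users is normally a junction to C:\ProgramData, so the two files above are expected to be identical; a quick byte-for-byte comparison can be scripted (a sketch, paths from this post, cmp assumed available, e.g. under Git Bash or WSL):

```shell
#!/usr/bin/env bash
# Compare two krb5.ini copies byte for byte; prints "identical" or "different".
same_config() {
  if cmp -s "$1" "$2"; then echo identical; else echo different; fi
}

# Example usage with the paths from this post:
# same_config "/c/ProgramData/MIT/Kerberos5/krb5.ini" "/c/Users/All Users/MIT/Kerberos5/krb5.ini"
```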
03-11-2019 08:02 PM
See below.
Volume in drive C is OS
Volume Serial Number is 2261-6617
Directory of C:\ProgramData\Anaconda3\Library\bin
10/26/2018 02:44 PM 75,264 krb5.exe
1 File(s) 75,264 bytes
Directory of C:\ProgramData\Anaconda3\Library\include
12/20/2018 04:30 PM <DIR> krb5
05/03/2018 07:33 AM 402 krb5.h
1 File(s) 402 bytes
Directory of C:\ProgramData\Anaconda3\Library\include\krb5
10/26/2018 02:40 PM 342,049 krb5.h
1 File(s) 342,049 bytes
Directory of C:\ProgramData\MIT\Kerberos5
09/13/2018 09:44 PM 394 krb5.ini
1 File(s) 394 bytes
Directory of C:\Users\All Users\Anaconda3\Library\bin
10/26/2018 02:44 PM 75,264 krb5.exe
1 File(s) 75,264 bytes
Directory of C:\Users\All Users\Anaconda3\Library\include
12/20/2018 04:30 PM <DIR> krb5
05/03/2018 07:33 AM 402 krb5.h
1 File(s) 402 bytes
Directory of C:\Users\All Users\Anaconda3\Library\include\krb5
10/26/2018 02:40 PM 342,049 krb5.h
1 File(s) 342,049 bytes
Directory of C:\Users\All Users\MIT\Kerberos5
09/13/2018 09:44 PM 394 krb5.ini
1 File(s) 394 bytes
Directory of C:\Users\chenc5\AppData\Local\conda\conda\pkgs\krb5-1.16.1-hc04afaa_7\Library\bin
10/26/2018 02:44 PM 75,264 krb5.exe
1 File(s) 75,264 bytes
Directory of C:\Users\chenc5\AppData\Local\conda\conda\pkgs\krb5-1.16.1-hc04afaa_7\Library\include
12/20/2018 04:21 PM <DIR> krb5
05/03/2018 07:33 AM 402 krb5.h
1 File(s) 402 bytes
Directory of C:\Users\chenc5\AppData\Local\conda\conda\pkgs\krb5-1.16.1-hc04afaa_7\Library\include\krb5
10/26/2018 02:40 PM 342,049 krb5.h
1 File(s) 342,049 bytes
Total Files Listed:
11 File(s) 1,253,933 bytes
3 Dir(s) 273,797,316,608 bytes free
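A listing like the one above can be reproduced without knowing the locations in advance; a hedged sketch of the equivalent search (the Windows listing was likely produced with something like `dir C:\ /s /b | findstr krb5`; a POSIX find is used for the runnable part):

```shell
#!/usr/bin/env bash
# Find every krb5.* file under a root directory - a POSIX equivalent of
# the recursive Windows dir/findstr listing shown above.
find_krb5_files() {
  find "$1" -type f -name 'krb5.*' 2>/dev/null
}

# Example usage (e.g. from Git Bash/WSL, scanning the C: drive):
# find_krb5_files /c
```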
03-07-2019 09:00 PM
C:\ProgramData\MIT\Kerberos5\krb5.ini
[libdefaults]
default_realm = CLOUDERA
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = rc4-hmac
default_tkt_enctypes = rc4-hmac
permitted_enctypes = rc4-hmac
udp_preference_limit = 1
kdc_timeout = 3000
[realms]
CLOUDERA = {
kdc = quickstart.cloudera
admin_server = quickstart.cloudera
}
Thank you.
03-06-2019 04:59 AM
Yes, no problems. "quickstart.cloudera" is in the hosts file.
02-23-2019 10:42 PM
I encountered the error below on my laptop while using the ODBC driver to Get Data in Power BI Desktop.
My environment:
Hadoop
Cloudera quickstart VM 5.13
Kerberized with MIT
Laptop
Windows 10, 64-bit
Microsoft Hive ODBC Driver 2.1 (64-bit)
MIT Kerberos Ticket Manager installed
/etc/krb5.conf
[libdefaults]
default_realm = CLOUDERA
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = rc4-hmac
default_tkt_enctypes = rc4-hmac
permitted_enctypes = rc4-hmac
udp_preference_limit = 1
kdc_timeout = 3000
[realms]
CLOUDERA = {
kdc = quickstart.cloudera
admin_server = quickstart.cloudera
}
[domain_realm]
Connecting to Kerberized Hive through the Microsoft Hive ODBC driver:
From Power BI: Get Data -> ODBC -> select my DSN.
I entered the username and password in the Database tab:
username: cloudera
password: cloudera
Error in Power BI while connecting to Hive through the ODBC:
Details: "ODBC: ERROR [HY000] [Microsoft][Hardy] (34) Error from server: SASL(-1): generic failure: Failed to initialize security context: No authority could be contacted for authentication. . ERROR [HY000] [Microsoft][Hardy] (34) Error from server: SASL(-1): generic failure: Failed to initialize security context: No authority could be contacted for authentication.
I tested the DSN in the 64-bit ODBC Administrator without any errors, and I am also able to use it from DBeaver.
Please shed some light on this error.
Thank you.
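One low-level sanity check before retrying the connection is confirming which realm the client configuration will actually use; a minimal sketch (the grep/sed parsing is an approximation of real krb5 config parsing, and the krb5.ini path is the one from this post):

```shell
#!/usr/bin/env bash
# Extract the default_realm value from a krb5 config file.
get_default_realm() {
  sed -n 's/^[[:space:]]*default_realm[[:space:]]*=[[:space:]]*//p' "$1" | head -n1
}

# Example usage (path from this post, via e.g. Git Bash/WSL);
# the expected output here would be CLOUDERA:
# get_default_realm "/c/ProgramData/MIT/Kerberos5/krb5.ini"
```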
Labels:
- Apache Hive
- Kerberos
- Quickstart VM
02-09-2019 12:02 AM
Thank you for your reply. Yes, it's working now. I changed my commands as below: I added an escape in front of each $.
export JAVA_HOME=/usr/jdk64/jdk1.8.0_112
./files/nifi-toolkit-*/bin/tls-toolkit.sh client -c $(hostname -f) -D "CN=hadoopadmin, OU=LAB.HORTONWORKS.NET" -p 10443 -t "Centos\$168Centos\$168" -T pkcs12
So the 16-byte minimum applies to the -t parameter value, not to the NiFi CA Token property. 🙂 Thank you again. Cheers,
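The escaping matters because, unquoted or inside double quotes, bash expands $1 as a positional parameter (usually empty), so the token silently shrinks before tls-toolkit.sh ever sees it; a small demonstration (token value from this thread):

```shell
#!/usr/bin/env bash
# Single quotes (or \$ escapes) keep the token literal: all 20 characters.
literal='Centos$168Centos$168'
echo "${#literal}"    # 20

# In double quotes, $1 (this script's first argument, empty here) is
# expanded, so "Centos$168" collapses to "Centos68" - only 8 bytes,
# below the toolkit's 16-byte minimum.
expanded="Centos$168"
echo "${#expanded}"
```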
02-07-2019 11:26 PM
I am getting an error - "Token does not meet minimum size of 16 bytes" - while generating a browser certificate for NiFi login. I tried modifying "NiFi CA Token" (nifi.toolkit.tls.token) to 20 characters - "Centos$168Centos$168" - but still in vain. Which token is it? Or perhaps my JAVA_HOME is not set correctly? Any suggestions?
My environment (only relevant items listed):
HDP-3.1.0.0 (3.1.0.0-78)
NiFi 1.7.0
Kerberized with AD (Win 2012R2)
My commands (run from the Ambari server, which is also running NiFi):
wget http://localhost:8080/resources/common-services/NIFI/1.0.0/package/archive.zip
unzip archive.zip
export JAVA_HOME=/usr/jdk64/jdk1.8.0_112
./files/nifi-toolkit-*/bin/tls-toolkit.sh client -c $(hostname -f) -D "CN=hadoopadmin, OU=LAB.HORTONWORKS.NET" -p 10443 -t Centos$168 -T pkcs12
Error:
2019/02/07 09:07:28 INFO [main] org.apache.nifi.toolkit.tls.commandLine.BaseTlsToolkitCommandLine: Command line argument --keyStoreType=pkcs12 only applies to keystore, recommended truststore type of JKS unaffected.
2019/02/07 09:07:28 INFO [main] org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: Requesting new certificate from hdp311.lab.hortonworks.net:10443
Service client error: java.security.GeneralSecurityException: Token does not meet minimum size of 16 bytes.
Labels:
- Apache NiFi
01-11-2019 02:36 PM
Continued previous post:
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
19/01/10 22:57:57 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
19/01/10 22:57:57 WARN MetricsSystem: Stopping a MetricsSystem that is not running
Traceback (most recent call last):
File "/usr/hdp/current/spark2-client/python/pyspark/shell.py", line 54, in <module>
spark = SparkSession.builder.getOrCreate()
File "/usr/hdp/current/spark2-client/python/pyspark/sql/session.py", line 173, in getOrCreate
sc = SparkContext.getOrCreate(sparkConf)
File "/usr/hdp/current/spark2-client/python/pyspark/context.py", line 353, in getOrCreate
SparkContext(conf=conf or SparkConf())
File "/usr/hdp/current/spark2-client/python/pyspark/context.py", line 119, in __init__
conf, jsc, profiler_cls)
File "/usr/hdp/current/spark2-client/python/pyspark/context.py", line 181, in _do_init
self._jsc = jsc or self._initialize_context(self._conf._jconf)
File "/usr/hdp/current/spark2-client/python/pyspark/context.py", line 292, in _initialize_context
return self._jvm.JavaSparkContext(jconf)
File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1525, in __call__
File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1547158600691_0003 to YARN : org.apache.hadoop.security.AccessControlException: Queue root.default already has 0 applications, cannot accept submission of application: application_1547158600691_0003
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:304)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:174)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
01-11-2019 02:36 PM
Thank you for posting this very informative Hive Warehouse Connector article. I followed your steps but got errors when running pyspark; see my steps and the error below. Can you shed some light on this error? Thank you in advance for your help.
1) Downloaded HDP Sandbox 3.0.1 and set it up (Hive: 3.1.0, Spark: 2.3.1)
2) Enabled 'Interactive Query' in Hive
3) Appended the following to /usr/hdp/3.0.1.0-187/etc/spark2/conf/spark-defaults.conf:
spark.hadoop.hive.llap.daemon.service.hosts @llap0
spark.sql.hive.hiveserver2.jdbc.url jdbc:hive2://sandbox-hdp.hortonworks.com:10000
spark.datasource.hive.warehouse.load.staging.dir /tmp
spark.datasource.hive.warehouse.metastoreUri thrift://sandbox-hdp.hortonworks.com:9083
spark.hadoop.hive.zookeeper.quorum sandbox-hdp.hortonworks.com:2181
4) Ran pyspark:
pyspark --master yarn \
--jars /usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly-1.0.0.3.0.1.0-187.jar \
--py-files /usr/hdp/current/hive_warehouse_connector/pyspark_hwc-1.0.0.3.0.1.0-187.zip \
--conf spark.security.credentials.hiveserver2.enabled=false
5) Got the error below:
[root@~]# pyspark --master yarn \
> --jars /usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly-1.0.0.3.0.1.0-187.jar \
> --py-files /usr/hdp/current/hive_warehouse_connector/pyspark_hwc-1.0.0.3.0.1.0-187.zip \
> --conf spark.security.credentials.hiveserver2.enabled=false
SPARK_MAJOR_VERSION is set to 2, using Spark2
Python 2.7.5 (default, Jul 13 2018, 13:06:57)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-28)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
19/01/10 22:57:49 ERROR SparkContext: Error initializing SparkContext.
org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1547158600691_0002 to YARN : org.apache.hadoop.security.AccessControlException: Queue root.default already has 0 applications, cannot accept submission of application: application_1547158600691_0002
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:304)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:174)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
19/01/10 22:57:49 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
19/01/10 22:57:49 WARN MetricsSystem: Stopping a MetricsSystem that is not running
19/01/10 22:57:49 WARN SparkContext: Another SparkContext is being constructed (or threw an exception in its constructor). This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). The other SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
py4j.Gateway.invoke(Gateway.java:238)
py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
py4j.GatewayConnection.run(GatewayConnection.java:238)
java.lang.Thread.run(Thread.java:748)
19/01/10 22:57:57 ERROR SparkContext: Error initializing SparkContext.
org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1547158600691_0003 to YARN : org.apache.hadoop.security.AccessControlException: Queue root.default already has 0 applications, cannot accept submission of application: application_1547158600691_0003
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:304)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:174)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)