Member since: 09-24-2015
Posts: 816
Kudos Received: 488
Solutions: 189
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2663 | 12-25-2018 10:42 PM
 | 12195 | 10-09-2018 03:52 AM
 | 4200 | 02-23-2018 11:46 PM
 | 1887 | 09-02-2017 01:49 AM
 | 2208 | 06-21-2017 12:06 AM
02-14-2017
10:56 AM
The debug log says I get "error Message is KDC has no support for encryption type" on krbtgt/HDP-NET.COM@PQR-NET.COM, and the type given is "EType: sun.security.krb5.internal.crypto.Aes256CtsHmacSha1EType". But I have that set in my krb5.conf as an enctype, and AD supports it as well. No idea...
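For reference, this is roughly how I've been checking the enctypes on the KDC side (MIT Kerberos commands; the principal is the trust principal from my setup):
kadmin.local -q "getprinc krbtgt/HDP-NET.COM@PQR-NET.COM"   # lists the key enctypes stored for the trust principal
klist -e                                                    # shows the enctypes of the tickets currently in my cache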
02-14-2017
08:27 AM
Hi @Srikanth Puli, you cannot get this information without running a query. Statistics can return only general information about the table, like how many records there are, total size in bytes, etc. How could anyone know how many non-null c1 fields there are without running a query? And what about non-null c1 and non-null c2 and many other possible conditions? So, please run the query to find out:
SELECT COUNT(*), MAX(LENGTH(t1.c1)) FROM t1 WHERE t1.c1 IS NOT NULL;
provided that c1 is a string.
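If you need the same numbers for more than one column, COUNT over a column counts only its non-NULL values, so (reusing c1 and c2 from your example) something like this can do it in a single scan:
SELECT COUNT(t1.c1), MAX(LENGTH(t1.c1)), COUNT(t1.c2), MAX(LENGTH(t1.c2)) FROM t1; -- MAX(LENGTH(...)) also ignores NULLs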
02-14-2017
06:29 AM
Is there any way to troubleshoot and find out what's wrong with a one-way trust from a KDC to AD? My problem is that the AD domain is set in lower-case letters: pqr-net.com. The KDC on the cluster side is up and running, the cluster is kerberized against it and works fine, and users registered on the KDC can use the cluster without problems. For AD users, I followed the steps from the documentation and from here. My HDP realm is HDP-NET.COM. As an additional realm in my krb5.conf I have set PQR-NET.COM in capitals, and I can do "kinit aduser1@PQR-NET.COM" and obtain a ticket. [I also tried to set the domain in lower-case letters, pqr-net.com, but in that case kinit doesn't work.] So, aduser1 can get a ticket, but cannot access the cluster: "hdfs dfs -ls" returns:
17/02/14 13:02:37 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/02/14 13:02:37 WARN ipc.Client: Couldn't setup connection for aduser1@PQR-NET.COM to h1002.pqr-net.com/192.168.31.167:8020
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Fail to create credential. (63) - No service creds)]
where h1002.pqr-net.com/192.168.31.167:8020 is my active NameNode. On the AD side I did:
ksetup /addkdc HDP-NET.COM kdchost.pqr-net.com
netdom trust HDP-NET.COM /Domain:PQR-NET.COM /add /realm /passwordt:mypassword
and in my KDC I created a principal "krbtgt/HDP-NET.COM@PQR-NET.COM" and set its password to "mypassword". I have also added rules to my auth_to_local for AD users. Besides the error above, the only other error I could find was in krb5kdc.log, but only for a short period of time; it doesn't appear any more: (Error): TGS_REQ: UNKNOWN SERVER: server='krbtgt/HDP-NET.COM@PQR-NET.COM'. I suspect the problem is the AD domain being in lower-case letters, but I'm not sure. Any help will be appreciated.
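This is roughly how I've been trying to narrow it down from a client node (assuming the NameNode principal follows the usual HDP nn/_HOST pattern, which may not match my exact setup):
kinit aduser1@PQR-NET.COM                      # get the AD TGT
kvno krbtgt/HDP-NET.COM@PQR-NET.COM            # ask AD for the cross-realm ticket into HDP-NET.COM
kvno nn/h1002.pqr-net.com@HDP-NET.COM          # ask my KDC for a NameNode service ticket
klist                                          # check which of the tickets above actually arrived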
02-13-2017
10:44 PM
You may need to run "hbck -repair" several times. It's good to disable writes during the repair. After each "repair", check the HBase Web UI again, or just run hbck again, to find out whether anything has changed, like fewer RITs or fewer inconsistencies. Regarding the 2 RITs, you can also check their HDFS directories to confirm they are healthy (no missing blocks). Finally, you can try to restart the Region Servers hosting the RITs. So, your first target should be to get rid of the RITs, and then the other inconsistencies.
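A rough sequence, as a sketch (the HDFS path assumes the default HDP layout under /apps/hbase/data, and <table>/<region> are placeholders for your regions in transition):
hbase hbck                        # report-only run: lists inconsistencies and RITs
hbase hbck -repair                # attempts the repair; may need several passes
hdfs fsck /apps/hbase/data/data/default/<table>/<region> -files -blocks   # confirm the region files have no missing blocks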
02-12-2017
11:42 PM
In the list of nodes you are adding to the cluster (those 3), use FQDNs as hostnames, and avoid using "localhost". Also make sure the "hostname -f" command on your Ambari node returns its FQDN. If all of that is set, you can try running "ambari-agent reset <Ambari-node-FQDN>" to tell the ambari-agents about your Ambari host. You can also find that property on each node in /etc/ambari-agent/conf/ambari-agent.ini, under the [server] section, as hostname.
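For reference, this is roughly what to check on each agent node (ambari-server.example.com below is just a placeholder for your real Ambari server FQDN):
grep -A 2 '^\[server\]' /etc/ambari-agent/conf/ambari-agent.ini   # hostname= should be the Ambari server FQDN, not localhost
ambari-agent reset ambari-server.example.com                      # reset the agent and point it at the server hostname
ambari-agent restart                                              # start it again so it re-registers with the server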
02-12-2017
05:40 AM
A SparkSession called "spark" is created for you when you run spark-shell in Spark2, so you can just use it as below. And please accept the answer if it was helpful. Thanks!
scala> spark.sql("show tables").show()
+-----------+-----------+
| tableName|isTemporary|
+-----------+-----------+
| flights| false|
|flights_ext| false|
+-----------+-----------+
02-12-2017
05:21 AM
2 Kudos
Sorry, in Spark2, sqlContext is not created by spark-shell; you can create it yourself from sc, followed by importing its implicits. Then it should work:
scala> val sqlContext = new org.apache.spark.sql.SQLContext(sc)
scala> import sqlContext.implicits._
scala> sqlContext.sql("show tables").show()
+-----------+-----------+
| tableName|isTemporary|
+-----------+-----------+
| flights| false|
|flights_ext| false|
+-----------+-----------+
You can also replace sqlContext with a hiveContext:
scala> val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
Actually, in Spark2 you are encouraged to use SparkSession, see the link for details; it includes the sqlContext functionality.
02-12-2017
04:47 AM
1 Kudo
You are trying to run the Spark2 spark-shell; have you done "export SPARK_MAJOR_VERSION=2"?
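On HDP that would be something along these lines (a sketch; the wrapper script picks up the variable when launching the shell):
export SPARK_MAJOR_VERSION=2
spark-shell    # should now start the Spark2 shell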
02-11-2017
11:25 PM
Your /tmp/data on HDFS is a file, not a directory. So, when you did the copy for the first time:
tmp]$ hdfs dfs -copyFromLocal wordFile.txt /tmp/data/
wordFile.txt was copied to /tmp and renamed to "data". That's why the second time the command complains that the file exists: by default "-put" and "-copyFromLocal" don't overwrite target files. You can force an overwrite by adding "-f":
tmp]$ hdfs dfs -copyFromLocal -f wordFile.txt /tmp/data/
If you copy to a directory, then the original file name will be preserved:
tmp]$ hdfs dfs -copyFromLocal wordFile.txt /tmp
will create /tmp/wordFile.txt on HDFS.
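To double-check what /tmp/data actually is before copying, something like this should work:
hdfs dfs -ls /tmp/data                                            # a file shows a single entry; a directory lists its contents
hdfs dfs -test -d /tmp/data && echo "directory" || echo "file (or missing)"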
02-10-2017
01:53 PM
1 Kudo
Yes, correct, you have already dedicated one quarter of your nodes to masters, so pack all the master components there and leave the rest to do processing. Regarding Solr and ES, you can install them as in my proposal above. And there's no need for HDFS federation, because the cluster is not that large.