Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 502 | 06-04-2025 11:36 PM |
|  | 1046 | 03-23-2025 05:23 AM |
|  | 547 | 03-17-2025 10:18 AM |
|  | 2044 | 03-05-2025 01:34 PM |
|  | 1279 | 03-03-2025 01:09 PM |
03-23-2020
12:14 PM
@kvinod Your issue can be resolved by merging the keytabs in question.

Merge keytab files

If you have multiple keytab files that need to be in one place, you can merge the keys with the ktutil command. The process differs depending on whether you are using MIT or Heimdal Kerberos. To merge keytab files using MIT Kerberos, use ktutil. In the example below I am merging [mcaf.keytab], [hbase.keytab] and [zk.keytab] into mcafmerged.keytab. You can merge any number of keytabs, but the user executing the commands must have the correct permissions on the files; it can be a good idea to copy the keytabs into the user's home directory and merge them from there.

$ ktutil
ktutil: read_kt mcaf.keytab
ktutil: read_kt hbase.keytab
ktutil: read_kt zk.keytab
ktutil: write_kt mcafmerged.keytab
ktutil: quit

To verify the merge, use:

$ klist -k mcafmerged.keytab

Now, to access HBase:

$ sudo kinit -kt mcafmerged.keytab mcaf@Domain.ORG

The keytab file is independent of the computer it's created on, its filename, and its location in the file system. Once it's created, you can rename it or move it to another location on the same computer.
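If you would rather script the merge than type each ktutil command interactively, MIT ktutil also accepts its commands on standard input, so a here-document works. This is only a minimal sketch reusing the same example keytab names as above:

# Non-interactive merge with MIT ktutil (same example files as above)
ktutil <<'EOF'
read_kt mcaf.keytab
read_kt hbase.keytab
read_kt zk.keytab
write_kt mcafmerged.keytab
quit
EOF

# Confirm every expected principal made it into the merged keytab
klist -kte mcafmerged.keytab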
03-12-2020
04:33 PM
@Joe_Jim By default, the broker binds to localhost. Can you share your server.properties and logs? I am of the opinion you need to update the listeners entry in server.properties:

listeners=PLAINTEXT://<IP_TO_CHANGE>:9092
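For reference, a minimal sketch of the relevant part of server.properties; whether you also need advertised.listeners depends on how clients reach the broker, and the host/IP values below are placeholders, not values taken from your setup:

# server.properties (sketch)
# Bind on all interfaces (or a specific IP) instead of localhost
listeners=PLAINTEXT://0.0.0.0:9092
# Address the broker advertises to clients; must be resolvable and reachable from them
advertised.listeners=PLAINTEXT://<broker-host-or-ip>:9092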
02-10-2020
07:59 AM
@saivenkatg55 That could be a memory issue on your cluster. Can you share the values of the configs below?

spark.executor.memory
yarn.nodemanager.resource.memory-mb
yarn.scheduler.maximum-allocation-mb

Here are some links to help: How to calculate node and executors memory in Apache Spark. After adjusting those, share the new output.
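If it helps with gathering those values, here is one quick way to pull them from a cluster node. The conf paths below are the usual HDP defaults and may differ on your install, and the -A1 grep simply prints the property name plus the following value line from yarn-site.xml:

# Spark side (path assumes the standard HDP Spark2 client conf dir)
grep spark.executor.memory /etc/spark2/conf/spark-defaults.conf

# YARN side
grep -A1 "yarn.nodemanager.resource.memory-mb" /etc/hadoop/conf/yarn-site.xml
grep -A1 "yarn.scheduler.maximum-allocation-mb" /etc/hadoop/conf/yarn-site.xml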
02-03-2020
11:08 AM
@mike_bronson7 I will not have access to my environment for 4 days as I am traveling, but I think you can filter using service_name=HDFS. I still need to test that, but that's the way to go.
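Untested, as I mentioned, but building on the host_components call from my earlier reply, the filter would look something like the sketch below; the host, port and cluster name are placeholders:

curl -u admin:admin -H "X-Requested-By:ambari" -X GET \
  "http://<ambari-server>:<port>/api/v1/clusters/<clustername>/host_components?HostRoles/service_name=HDFS&fields=HostRoles/host_name,HostRoles/component_name"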
02-03-2020
08:36 AM
1 Kudo
@mike_bronson7 Can you run the below? It should give the desired response:

curl -u admin:admin -H "X-Requested-By:ambari" -X GET "http://<ambari-server>:<port>/api/v1/clusters/<clustername>/host_components?HostRoles/stale_configs=false&fields=HostRoles/service_name"

Please let me know.
01-31-2020
05:58 AM
1 Kudo
@Manoj690 It's always a good idea to share the HDP and ZooKeeper versions plus the ZooKeeper logs in /var/log/*. Having said that, can you share your zoo.cfg? If you really need to enable all four-letter-word commands by default, you can use the asterisk option so you don't have to include every command one by one in the whitelist. See below:

4lw.commands.whitelist=*

As you have not shared your logs, that's a starting point. Then restart your ZooKeeper and let me know!
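A minimal sketch of what that looks like in zoo.cfg, plus a quick check afterwards. Whitelisting only the commands you actually use (for example ruok, stat, srvr) is safer than the asterisk, and the nc check assumes ZooKeeper listens on the default port 2181:

# zoo.cfg
4lw.commands.whitelist=ruok, stat, srvr

# after restarting ZooKeeper, verify the commands are accepted
echo ruok | nc localhost 2181
echo stat | nc localhost 2181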
01-23-2020
03:34 AM
@kirkade Unfortunately I have MySQL databases, but if you could preserve that installation I could try to reproduce your problem this weekend; I have been overloaded these past days. In my experience reinstallation might not resolve the problem, and it's usually a good learning curve to face a problem head on. Just imagine you were at a client site: you wouldn't ask them to reinstall the environment. Just share the create and load scripts and the Postgres version so I can reproduce your problem.
01-20-2020
06:21 PM
@saivenkatg55 That's the desired result once you enable the Ranger plugin for Hive: as you said, permissions are then managed in Ranger. Guessing from your scrambled URL jdbc:hive2://w0lxqhdp03:2181/w0lxq, check the Ranger policies and ensure the user executing the SQL has SELECT on the underlying database w0lxq. Have a look at this Hive/Ranger security guidance. HTH
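Once the Ranger policy grants SELECT, a quick way to confirm it took effect is to connect with beeline as the affected user and run a trivial query. This is only a sketch: the ZooKeeper service-discovery parameters below are an assumption reconstructed from your scrambled URL, and <user> is a placeholder:

beeline -u "jdbc:hive2://w0lxqhdp03:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2" \
  -n <user> -e "SHOW TABLES IN w0lxq;"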
01-20-2020
01:29 AM
@kirkade I thought you had the output directory and it was the connect command which was giving you issues! If you have time to read through the Sqoop documentation you will see that you need to give Sqoop the destination directory; see the highlighted options. This is how your command should have been (the below assumes you have the privileges to write to the HDFS directory /user/kirkade, and note that the connector-specific --schema option goes after the -- separator):

sqoop import \
--connect jdbc:postgresql://127.0.0.1:54322/postgres \
--username postgres \
--password xxxx \
--table dim_parameters \
--target-dir /user/kirkade \
-- --schema dwh

Hope that helps
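As a quick sanity check after the job completes, the imported files should land directly under the target directory; a minimal sketch (file names like part-m-00000 are Sqoop's defaults and may differ on your run):

hdfs dfs -ls /user/kirkade
hdfs dfs -cat /user/kirkade/part-m-00000 | head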
01-19-2020
09:42 AM
@kirkade I can see 2 syntax errors: -table dim_parameters should use a double dash [--], and between dim_parameters and --schema dwh you need the [--] separator, i.e. -- --schema dwh:

sqoop import --connect jdbc:postgresql://127.0.0.1:54322/postgres --username postgres -P -table dim_parameters -- --schema dwh

Could you try the below syntax? I intentionally added the --password:

sqoop import \
--connect jdbc:postgresql://127.0.0.1:54322/postgres \
--username postgres \
--password xxxx \
--table dim_parameters \
-- --schema dwh

Please let me know if that helped.
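If the import still complains, it can help to first confirm that the credentials and the schema-qualified table are reachable at all. sqoop eval runs an arbitrary query against the source database, so a minimal sketch (the query is just an illustration):

sqoop eval \
--connect jdbc:postgresql://127.0.0.1:54322/postgres \
--username postgres -P \
--query "SELECT COUNT(*) FROM dwh.dim_parameters"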