Member since: 10-20-2021
Posts: 21
Kudos Received: 1
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 726 | 05-03-2022 01:54 PM |
08-28-2024 04:30 AM
I have the same issue, but I am unable to locate the /hbase-secure znode; I only have the /hbase znode. Which one should I delete?
05-16-2022 03:55 PM
@loridigia , The best way to avoid this problem is to configure and use Knox Gateway to access those services. Knox will handle the authentication and will ensure you can use those services without requiring any specific browser configuration. Cheers, André
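For instance (a hypothetical sketch: the host, port, and "default" topology name are assumptions, so substitute the values from your own Knox deployment), accessing WebHDFS through Knox looks like this:

# Hypothetical sketch: host, port, and topology name are assumptions.
curl -ku myuser 'https://knox-host:8443/gateway/default/webhdfs/v1/tmp?op=LISTSTATUS'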
05-03-2022 01:54 PM
As described in the link that I posted above, these steps work perfectly. (P.S. I stopped ambari-server before running them.)
$ grep "password" /etc/ambari-server/conf/ambari.properties
server.jdbc.user.passwd=/etc/ambari-server/conf/password.dat
$ echo "bigdata_custom" > /etc/ambari-server/conf/password.dat
$ sudo -u postgres psql
postgres=# ALTER USER ambari WITH PASSWORD 'bigdata_custom';
postgres=# \q
$ ambari-server restart
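As an optional sanity check (a sketch that assumes the stock "ambari" database name and user; adjust if your deployment differs), you can confirm the new password before restarting:

# Optional sanity check; assumes the default "ambari" user and database.
$ psql -U ambari -d ambari -h localhost -W
# Enter "bigdata_custom" at the prompt; a successful login confirms the change.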
04-06-2022 05:17 AM
Does anyone know the solution?
02-12-2022 10:42 PM
INFO: Exception in thread "main" java.lang.IllegalArgumentException: Required AM memory
The above error is for the AM, not for the executors, hence you need to set the AM memory: spark.yarn.am.memory=2g
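A minimal sketch of passing this on the command line (the jar and class names are placeholders; note that spark.yarn.am.memory only applies in client mode, while in cluster mode the AM memory comes from spark.driver.memory):

# Sketch with placeholder jar/class names.
# spark.yarn.am.memory applies in yarn client mode; in cluster mode
# set spark.driver.memory instead.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --conf spark.yarn.am.memory=2g \
  --class com.example.MyApp myapp.jar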
02-08-2022 04:01 AM
Hi @loridigia If dynamic allocation is not enabled for the cluster/application and you set --conf spark.executor.instances=1, then it will launch only one executor. Apart from that executor, you will also see the AM/driver in the Executors tab of the Spark UI.
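For example (a sketch with placeholder jar/class names), a static-allocation submit that launches a single executor:

# Sketch with placeholder jar/class names: one executor via static allocation.
spark-submit \
  --master yarn \
  --conf spark.dynamicAllocation.enabled=false \
  --conf spark.executor.instances=1 \
  --class com.example.MyApp myapp.jar
# The Executors tab of the Spark UI will show this executor plus the driver.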
02-02-2022 07:52 AM
Hello @loridigia I don't think there is a direct way to achieve this, but we have a workaround: start the Spark job with Dynamic Allocation enabled, and set the minimum executors to "0", the initial executors to "1", and the idle timeout to "5s". With these configurations, the Spark job will start with 1 executor, and after 5 seconds that container will be killed because it has been idle for more than 5 seconds. At that point the Spark application is running with only the Driver / ApplicationMaster container. CONFIGS:
--conf spark.dynamicAllocation.enabled=true
--conf spark.shuffle.service.enabled=true
--conf spark.dynamicAllocation.executorIdleTimeout=5s
--conf spark.dynamicAllocation.initialExecutors=1
--conf spark.dynamicAllocation.maxExecutors=1
--conf spark.dynamicAllocation.minExecutors=0
NOTE: We can add these configs to spark-defaults.conf so that the changes apply to all jobs, but please be careful with your other / actual Spark job configurations. If this resolves your issue, please mark the answer as the accepted solution!
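Put together (a sketch only; the jar and class names are placeholders), a full invocation would look like:

# Sketch with placeholder jar/class names.
spark-submit \
  --master yarn \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.executorIdleTimeout=5s \
  --conf spark.dynamicAllocation.initialExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=1 \
  --conf spark.dynamicAllocation.minExecutors=0 \
  --class com.example.MyApp myapp.jar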
01-07-2022 05:40 AM
@Lo Good day!! Even though you submit the job with spark-submit via JDBC, the request goes through the HMS. HMS detects the type of client interacting with it (for example, Hive or Spark) and acts accordingly. If you see any timed-out requests in the logs, you can adjust the property below. hive.metastore.client.socket.timeout=<a higher value> Check and let us know if this helped.
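For instance (a sketch only; the 600s value is an arbitrary illustration), the timeout can be raised when starting the client:

# Sketch; 600s is an arbitrary example value.
hive --hiveconf hive.metastore.client.socket.timeout=600s
# For a permanent change, set the same property in hive-site.xml
# (or via your cluster manager) and restart the affected clients.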
11-09-2021 04:02 AM
Hi @loridigia, Based on the error you provided ("org.apache.hadoop.hbase.NotServingRegionException: table XXX is not online on worker04"), some regions may not be deployed on any RegionServers yet. Please check this table for inconsistencies:
1. sudo -u hbase hbase hbck -details > /tmp/hbck.txt
2. If you see inconsistencies, grep for ERROR in hbck.txt to see which region has the problem.
3. Check whether that region's directory is complete: hdfs dfs -ls -R /hbase
4. Check in the hbase shell whether the region's info is up to date in the hbase:meta table: scan 'hbase:meta'
5. Based on the type of issue, use the hbck2 jar to fix the inconsistencies: https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2
A combined shell sketch of these steps is shown below. These are general steps for dealing with this kind of problem; there could be more complex issues behind it. We suggest you file a case with Cloudera support. Thanks, Will
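As a combined sketch of steps 1-4 (adjust paths to your cluster):

# Steps 1-2: run hbck and look for inconsistent regions.
sudo -u hbase hbase hbck -details > /tmp/hbck.txt
grep ERROR /tmp/hbck.txt
# Step 3: verify the region's directory exists and is complete on HDFS.
hdfs dfs -ls -R /hbase
# Step 4: check the region's entry in hbase:meta.
echo "scan 'hbase:meta'" | sudo -u hbase hbase shell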
11-02-2021 11:26 AM
1 Kudo
I fixed it by editing the fields directly in the Ambari Postgres DB and then rebooting.