Member since: 10-20-2016 | Posts: 106 | Kudos Received: 0 | Solutions: 0
04-17-2023
08:30 AM
Hi @saivenkatg55 Could you please check where the datalakedev host name is configured in your Hadoop/Hive configuration files? Also, please check that you are able to ping the datalakedev hostname from the host where you are running the spark-sql command.
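If it helps, a minimal sketch of those checks, assuming the usual /etc/hadoop/conf and /etc/hive/conf client config locations (adjust the paths for your distribution):

# list the config files that reference the datalakedev host name
grep -ril 'datalakedev' /etc/hadoop/conf /etc/hive/conf 2>/dev/null
# confirm the host resolves and responds from the node where you run spark-sql
getent hosts datalakedev
ping -c 3 datalakedev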
05-24-2022
08:11 AM
Hi! Were you able to solve it? I'm facing the same problem with the same scenario.
03-02-2022
10:11 PM
1 Kudo
1. Stop CDH services and stop the Cloudera Manager Management Services.
2. Import the new Kerberos account. You will need an admin account on the KDC for this:
   CM UI -> Administration -> Security -> Kerberos Credentials -> "Import Kerberos Account Manager Credentials"
   Enter the username and password, then click the Import button.
3. Re-generate the missing principals if the previous step was successful:
   CM UI -> Administration -> Security -> "Kerberos Credentials"
   Click the "Generate Missing Credentials" button and wait until the credentials have been generated.
4. Start the Cloudera Manager Management Services.
5. Start the CDH services.
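If you prefer to script the CM steps instead of clicking through the UI, the Cloudera Manager REST API has equivalent commands. The sketch below is only an outline, assuming API version v41, a CM server at cm-host:7183, an admin:admin CM login, and a cmadm@EXAMPLE.COM KDC admin principal; verify the exact endpoints against the API documentation for your CM release:

# import the KDC admin credentials (the "Import Kerberos Account Manager Credentials" step)
curl -k -u admin:admin -X POST \
  "https://cm-host:7183/api/v41/cm/commands/importAdminCredentials?username=cmadm@EXAMPLE.COM&password=secret"
# regenerate missing Kerberos credentials (the "Generate Missing Credentials" step)
curl -k -u admin:admin -X POST "https://cm-host:7183/api/v41/cm/commands/generateCredentials"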
12-29-2021
08:42 AM
@ebeb, as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
01-05-2021
11:06 AM
@saivenkatg55

My assumptions:
- You have already executed the HDP environment preparation. If not, see https://docs.cloudera.com/HDPDocuments/Ambari-2.7.3.0/bk_ambari-installation/content/prepare_the_environment.html
- You are running on Linux (RedHat/CentOS) and you have root access.

Note: Replace test.ambari.com with the output of $ hostname -f and re-adapt the values to fit your cluster.
# root password = welcome1
# hostname = test.ambari.com
# ranger user and password are the same

Steps

1. Install the MySQL connector if it is not already installed [optional]:
# yum install -y mysql-connector-java

2. Shut down Ambari:
# ambari-server stop

3. Re-run the JDBC setup command (re-running it is harmless):
# ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar

4. Back up the Ambari server properties file:
# cp /etc/ambari-server/conf/ambari.properties /etc/ambari-server/conf/ambari.properties.bak

5. Increase the Ambari server timeout and connection-pool settings:
# echo 'server.startup.web.timeout=120' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.acquisition-size=5' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.max-age=0' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.max-idle-time=14400' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.max-idle-time-excess=0' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.idle-test-interval=7200' >> /etc/ambari-server/conf/ambari.properties

6. Recreate a new Ranger user and grants:
# mysql -u root -pwelcome1
CREATE USER 'rangernew'@'%' IDENTIFIED BY 'rangernew';
GRANT ALL PRIVILEGES ON *.* TO 'rangernew'@'localhost';
GRANT ALL PRIVILEGES ON rangernew.* TO 'rangernew'@'%';
GRANT ALL PRIVILEGES ON rangernew.* TO 'rangernew'@'localhost' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON rangernew.* TO 'rangernew'@'localhost' IDENTIFIED BY 'rangernew';
GRANT ALL PRIVILEGES ON rangernew.* TO 'rangernew'@'test.ambari.com' IDENTIFIED BY 'rangernew';
GRANT ALL PRIVILEGES ON rangernew.* TO 'rangernew'@'test.ambari.com';
GRANT ALL PRIVILEGES ON rangernew.* TO 'rangernew'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
quit;

7. Create the new Ranger database:
# mysql -u rangernew -prangernew
create database rangernew;
show databases;
quit;

8. Start the Ambari server:
# ambari-server start
...
Ambari Server 'start' completed successfully.

9. For the Ranger setup in the Ambari UI, use the hostname from this example (test.ambari.com) and the corresponding passwords, then test the Ranger DB connectivity. The connection test should succeed; if it does, you can now start Ranger successfully.

10. Drop the old Ranger DB:
# mysql -u root -pwelcome1
mysql> DROP DATABASE old_Ranger_name;

The above steps should resolve your Ranger issue. Was your question answered? If so, make sure to mark the answer as the accepted solution. If you find a reply useful, give it kudos by hitting the thumbs up button.
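Before running the Ambari connection test, you can sanity-check the new credentials and database from the command line. This is a minimal sketch using the example rangernew/rangernew values and the test.ambari.com host above; substitute your own values:

# confirm the new database exists and the new user can connect to it remotely
mysql -u rangernew -prangernew -h test.ambari.com -e "SHOW DATABASES LIKE 'rangernew';"
mysql -u rangernew -prangernew -h test.ambari.com rangernew -e "SELECT 1;"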
09-04-2020
10:14 PM
@saivenkatg55 Use the commands below to remove the background operation entries in the Ambari database.

# select task_id, role, role_command from host_role_command where status='IN_PROGRESS';

The above command lists all tasks in IN_PROGRESS status. You can also check for QUEUED or PENDING tasks by replacing 'IN_PROGRESS' with 'QUEUED' or 'PENDING'.

# update host_role_command set status='ABORTED' where status='QUEUED';

Use the above command to change the state to ABORTED.
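For reference, a minimal sketch of running those statements, assuming the default embedded PostgreSQL Ambari database with the default ambari user and database names (adjust for MySQL or custom credentials):

# open a session against the Ambari database, then run the SQL above
psql -U ambari ambari
ambari=> select task_id, role, role_command from host_role_command where status='IN_PROGRESS';
ambari=> update host_role_command set status='ABORTED' where status='QUEUED';
ambari=> \q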
03-16-2020
04:45 PM
@saivenkatg55 Sorry to hear you are having space issues with your content repository. The most common reason for space issues is that there are still active FlowFiles referencing the content claims. Since a content claim cannot be moved to an archive sub-directory or deleted until no FlowFiles reference that claim, even a small FlowFile still queued somewhere within a dataflow can prevent a large claim from being removed.

I recommend using the NiFi Summary UI (Global menu --> Summary) to locate connections with FlowFiles just sitting in them, not getting processed. Look at the connections tab and click on "queue" to sort connections by queued FlowFiles. A connection that has queued FlowFiles but shows 0 for both "In/Size" and "Out/Size" is what I would be looking for; this indicates the number of queued FlowFiles in that queue has not changed in the last 5 minutes. You can use the go-to arrow on the far right to jump to that connection on the canvas. If that data is not needed (just left over in some non-active dataflow), right-click on the connection to empty the queue. See if the content repo usage drops after clearing some queues.

It is also possible that not enough file handles exist for your NiFi service user, which can keep clean-up from working efficiently. I recommend increasing the open files limit and process limits for your NiFi service user (see the sample snippet at the end of this reply).

Check whether your flowfile_repository is large, or whether you have content claims moved to archive sub-directories that have not yet been purged. Does a restart of NiFi, which would release file handles, trigger some cleanup of the repo(s) on startup?

It is also dangerous to have all your NiFi repos co-located on the same disk, because of the risk of corruption to your flowfile repository, which can lead to data loss. The flowfile_repository should always be on its own disk, the content_repository should be on its own disk, and the provenance_repository should be on its own disk. The database repository can exist on a disk used for other NiFi files (config files, local state, etc.).

https://community.cloudera.com/t5/Community-Articles/HDF-NIFI-Best-practices-for-setting-up-a-high-performance/ta-p/244999

Here are some additional articles that may help you:

https://community.cloudera.com/t5/Community-Articles/Understanding-how-NiFi-s-Content-Repository-Archiving-works/ta-p/249418
https://community.cloudera.com/t5/Community-Articles/How-to-determine-which-FlowFiles-are-associated-to-the-same/ta-p/249185

Hope this helps, Matt
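On the open files / process limits point above, a minimal sketch of raising them via /etc/security/limits.conf; the service user name "nifi" and the 50000/10000 values are assumptions to adjust for your environment, and NiFi needs a restart afterwards for the new limits to take effect:

# append open file and process limits for the nifi service user
echo 'nifi  soft  nofile  50000' >> /etc/security/limits.conf
echo 'nifi  hard  nofile  50000' >> /etc/security/limits.conf
echo 'nifi  soft  nproc   10000' >> /etc/security/limits.conf
echo 'nifi  hard  nproc   10000' >> /etc/security/limits.conf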
02-25-2020
11:05 AM
@saivenkatg55 You need to literally use ./keystore.p12 in your command instead of just keystore.p12:

curl --cert-type P12 --cert ./keystore.p12:password --cacert nifi-cert.pem -v "https://w0lxqhdp04:9091/nifi-api/flow/search-results?q="

Hope this helps, Matt
02-10-2020
07:59 AM
@saivenkatg55 That could be a memory issue on your cluster. Can you share the below config settings?

set spark.executor.memory
set yarn.nodemanager.resource.memory-mb
set yarn.scheduler.maximum-allocation-mb

Here are some links to help: How to calculate node and executors memory in Apache Spark. After adjusting those settings, please share the new output.
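If it helps, here is one way to pull those values, assuming the usual /etc/hadoop/conf location for yarn-site.xml (adjust the path for your distribution):

# current executor memory as seen by Spark SQL
spark-sql -e "SET spark.executor.memory;"
# YARN NodeManager and scheduler memory limits from yarn-site.xml
grep -A1 'yarn.nodemanager.resource.memory-mb' /etc/hadoop/conf/yarn-site.xml
grep -A1 'yarn.scheduler.maximum-allocation-mb' /etc/hadoop/conf/yarn-site.xml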