Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2024 | 04-27-2020 03:48 AM |
 | 4012 | 04-26-2020 06:18 PM |
 | 3231 | 04-26-2020 06:05 PM |
 | 2594 | 04-13-2020 08:53 PM |
 | 3847 | 03-31-2020 02:10 AM |
02-13-2020
02:23 PM
@AarifAkhter We see the error is caused by a DB access/connection problem:

Caused by: java.lang.RuntimeException: Error while creating database accessor
at org.apache.ambari.server.orm.DBAccessorImpl.<init>(DBAccessorImpl.java:120)

So a few questions:
1. Is this a newly set up Ambari server?
2. Is the database running fine, and are the DB host/port accessible?
3. Are you using the embedded Postgres database?
4. When was the last time the Ambari server was running fine? Were any recent changes made to the Ambari server config/host?

I suspect there are more detailed messages, such as additional "Caused by" sections, in your ambari-server.log which were not copied fully in your last update. So can you please recheck your Ambari log, let us know what the first error was, and share the full stack trace? Sometimes there is more than one "Caused by" section in a single stack trace.
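For example, assuming the default embedded Postgres setup (port 5432; adjust the host/port if you use an external database), the following would pull the "Caused by" sections out of the log and confirm whether the DB port is listening:

# grep -A 3 "Caused by" /var/log/ambari-server/ambari-server.log | tail -n 50
# netstat -tnlpa | grep 5432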
02-13-2020
02:04 PM
@AarifAkhter As the failure message says: "Please check the logs for more information." In order to find out the cause of the failure, we will need to look at the Ambari server logs for more detailed messages. Hence please share the following files for initial investigation:

/var/log/ambari-server/ambari-server.log
/var/log/ambari-server/ambari-server.out
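If the full files are too large to attach, the last few hundred lines around the failure are usually enough to start with, for example:

# tail -n 500 /var/log/ambari-server/ambari-server.log > /tmp/ambari-server-log-tail.txt
# tail -n 500 /var/log/ambari-server/ambari-server.out > /tmp/ambari-server-out-tail.txt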
02-11-2020
11:39 PM
@TR7_BRYLE As requested earlier: if you still face any issue, can you please share the "ambari-agent.log" collected freshly after restarting the agent?
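For example (assuming the default agent log location), the following restarts the agent and, after it has had a minute or so to attempt registration, captures a fresh log to attach:

# ambari-agent restart
# tail -n 200 /var/log/ambari-agent/ambari-agent.log > /tmp/ambari-agent-fresh.log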
02-11-2020
05:40 PM
@TR7_BRYLE The error is actually due to a timeout (and not because of port access):

SSLError('The read operation timed out',)

The above error indicates that further communication, such as reading a response, is timing out. So we first have to check why the "https" request is timing out. We can use the following simple Python script to simulate what the agent actually does. The Ambari agent is a Python utility which tries to connect to the Ambari server, register itself, and send heartbeat messages to the Ambari server. So we can run the following script from the agent host to see whether it is able to connect or whether it also times out. We are using 'httplib' to test the access and HTTPS communication.

# cat /tmp/SSL/ssl_test.py
import httplib
import ssl

if __name__ == "__main__":
    # Connect to the Ambari server's agent registration port (8440) with a 5 second
    # timeout, skipping certificate verification for this test
    ca_connection = httplib.HTTPSConnection('kerlatest1.example.com:8440', timeout=5, context=ssl._create_unverified_context())
    # Request the same endpoint the agent uses during registration
    ca_connection.request("GET", '/connection_info')
    response = ca_connection.getresponse()
    print response.status
    data = response.read()
    print str(data)

Run it like the following:

# export PYTHONPATH=/usr/lib/ambari-agent/lib:/usr/lib/ambari-agent/lib/ambari_agent:$PYTHONPATH
# python /tmp/SSL/ssl_test.py

If the above works fine, it returns 200 and a result like the following:

# python /tmp/SSL/ssl_test.py
200
{"security.server.two_way_ssl":"false"}

If you notice any HTTPS communication or certificate related error, then you might want to refer to the article below and, according to your Ambari version, check whether you have the following defined in the "[security]" section of your ambari-agent.ini file:

[security]
force_https_protocol=PROTOCOL_TLSv1_2

If you still face any issue, can you please share the "ambari-agent.log" collected freshly after restarting the agent?

Reference Article: Java/Python Updates and Ambari Agent TLS Settings
https://community.cloudera.com/t5/Community-Articles/Java-Python-Updates-and-Ambari-Agent-TLS-Settings/ta-p/248328
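If the Python test also times out or fails during the TLS handshake, a raw TLS check from the agent host towards the server's registration port (8440, as used in the script above) can help narrow down whether the problem is at the network/TLS layer:

# openssl s_client -connect kerlatest1.example.com:8440 < /dev/null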
02-05-2020
11:01 PM
@MortyCodes Just to confirm: you want to pass a Python program file name to the "ExecuteStreamCommand" processor?

ExecuteStreamCommand Properties:
Command Arguments: /Jay/NifiDemo/test_python.py
Command Path: /bin/python
Argument Delimiter: ;

I tried the above approach and I can see the script is getting executed fine. Is that something like what you are looking for?
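For context on how the processor interacts with the script: ExecuteStreamCommand streams the incoming FlowFile content to the command's stdin and uses whatever the command writes to stdout as the content of the outgoing FlowFile. The actual content of my /Jay/NifiDemo/test_python.py is not shown above, so here is just a minimal illustrative sketch of such a script:

import sys

# ExecuteStreamCommand pipes the incoming FlowFile content to stdin; whatever is
# written to stdout becomes the content of the FlowFile routed to "output stream"
content = sys.stdin.read()

# Do something trivial so it is easy to confirm the script actually ran
sys.stdout.write("Received %d bytes of FlowFile content\n" % len(content))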
02-05-2020
10:33 PM
@gvbnn Those are just warnings; you can ignore them. They should not be causing the job failure. We also see that the application id "application_1580968178673_0001" was generated, so you should be able to check the status of your YARN application in the ResourceManager UI:

http://$RM_ADDRESS:8088/cluster/apps

If your cluster has enough resources, then you should also see the progress there for your application_id.
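You can also check the same application from the command line, for example:

# yarn application -status application_1580968178673_0001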
02-05-2020
10:24 PM
@gvbnn Those are just WARNING messages. Where do you see the errors? Can you please share more details if you are noticing any error?
02-05-2020
02:20 PM
@S_Waseem I tried to insert the following statement into my Hive table "customer" using your approach, and it worked with a slight modification to your command:

Insert into customer values (5000, "CustFive", "BRN");

So can you please check and compare it with your command? Changes:
1. Using the sudo approach which you mentioned, supplying the username and password to beeline with the -n and -p options as shown below.
2. The values in quotation marks were changed from "CustFive" to \"CustFive\", as they are enclosed inside the -c "..." statement.

Example Output:

[root@newhwx1 ~]# export user_hive=hive
[root@newhwx1 ~]# echo ${user_hive}
hive
[root@newhwx1 ~]# sudo su - ${user_hive} -c "beeline -n hive -p hive -u 'jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2' -e 'insert into customer values (5000, \"CustFive\", \"BRN\")'"
Connecting to jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Connected to: Apache Hive (version 1.2.1000.2.6.5.0-292)
Driver: Hive JDBC (version 1.2.1000.2.6.5.0-292)
Transaction isolation: TRANSACTION_REPEATABLE_READ
INFO : Tez session hasn't been created yet. Opening session
INFO : Dag name: insert into customer values (5000, ..."BRN")(Stage-1)
INFO : Status: Running (Executing on YARN cluster with App id application_1579040432494_27933)
INFO : Loading data to table default.customer from hdfs://My-NN-HA/apps/hive/warehouse/customer/.hive-staging_hive_2020-02-05_22-12-33_107_183476754828623767-2/-ext-10000
--------------------------------------------------------------------------------
VERTICES STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
--------------------------------------------------------------------------------
Map 1 .......... SUCCEEDED 1 1 0 0 0 0
--------------------------------------------------------------------------------
VERTICES: 01/01 [==========================>>] 100% ELAPSED TIME: 6.46 s
--------------------------------------------------------------------------------
INFO : Table default.customer stats: [numFiles=5, numRows=5, totalSize=90, rawDataSize=85]
No rows affected (22.328 seconds)
Beeline version 1.2.1000.2.6.5.0-292 by Apache Hive
Closing: 0: jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2

Output of "select * from customer" afterwards:

[root@newhwx1 ~]# sudo su - ${user_hive} -c "beeline -n hive -p hive -u 'jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2' -e 'select * from customer'"
Connecting to jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Connected to: Apache Hive (version 1.2.1000.2.6.5.0-292)
Driver: Hive JDBC (version 1.2.1000.2.6.5.0-292)
Transaction isolation: TRANSACTION_REPEATABLE_READ
+------------------+--------------------+----------------+--+
| customer.custid | customer.custname | customer.city |
+------------------+--------------------+----------------+--+
| 1000 | CustOne | BLR |
| 2000 | CustTwo | PUNE |
| 3000 | CustThree | HYD |
| 4000 | CustFour | NSW |
| 5000 | CustFive | BRN |
+------------------+--------------------+----------------+--+
5 rows selected (1.108 seconds)
Beeline version 1.2.1000.2.6.5.0-292 by Apache Hive
Closing: 0: jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
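As a side note, if the escaping inside the nested -c "..." and -e '...' quoting becomes hard to manage, beeline's -f option lets you keep the statement in a file so the inner quotes need no shell escaping. The file name and values below are just illustrative:

[root@newhwx1 ~]# echo 'insert into customer values (6000, "CustSix", "SYD")' > /tmp/insert_customer.sql
[root@newhwx1 ~]# sudo su - ${user_hive} -c "beeline -n hive -p hive -u 'jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2' -f /tmp/insert_customer.sql"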
02-04-2020
11:25 PM
@Quang_Vu_Blog As per the kafka-connect docs, the default REST port "rest.port" is 8083:

rest.port - the port the REST interface listens on for HTTP requests

So are you getting a conflict on port 8003, or is there a typo (should it be 8083)? Can you try changing the "rest.port" in your worker config to something else and then try again? Also, please run the commands below before starting kafka-connect to verify whether there is any port conflict or bind address issue:

# netstat -tnlpa | grep 8083
# netstat -tnlpa | grep 8003
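For example, to move the REST interface to another free port, set this in the worker properties file you start kafka-connect with (connect-distributed.properties is just an example name, and 8084 is an arbitrary free port):

# Worker config, e.g. connect-distributed.properties
rest.port=8084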
02-04-2020
02:37 AM
@Kureikana Have you explored the Ambari Files View option to download files from HDFS using a browser?

https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/configuring-ambari-views/content/amb_create_and_configure_a_files_view_instance.html