Member since: 03-14-2016
Posts: 4721
Kudos Received: 1108
Solutions: 874
My Accepted Solutions
Views | Posted
---|---
1221 | 04-27-2020 03:48 AM
1866 | 04-26-2020 06:18 PM
1736 | 04-26-2020 06:05 PM
1272 | 04-13-2020 08:53 PM
1599 | 03-31-2020 02:10 AM
02-24-2020
03:06 PM
@mike_bronson7 I think the only change you will need to make is the "KAFKA" service name in uppercase, because "kafka" in lowercase does not exist as a service. Following is the example which I tested in my cluster, and it works fine. Note that the JSON payload is double-quoted (with the inner quotes escaped) so that the shell variables expand inside it:

# export service=kafka    (INCORRECT)
# export service=KAFKA

To stop the KAFKA service:

# curl -iLv -u "admin:admin" -H 'X-Requested-By: ambari' -X PUT -d "{\"RequestInfo\":{\"context\":\"_PARSE_.STOP.$service\",\"operation_level\":{\"level\":\"SERVICE\",\"cluster_name\":\"$CLUSTER_NAME\",\"service_name\":\"$service\"}},\"Body\":{\"ServiceInfo\":{\"state\":\"INSTALLED\"}}}" http://$HOST:8080/api/v1/clusters/$CLUSTER_NAME/services/$service

To start the KAFKA service:

# curl -iLv -u "admin:admin" -H 'X-Requested-By: ambari' -X PUT -d "{\"RequestInfo\":{\"context\":\"_PARSE_.START.$service\",\"operation_level\":{\"level\":\"SERVICE\",\"cluster_name\":\"$CLUSTER_NAME\",\"service_name\":\"$service\"}},\"Body\":{\"ServiceInfo\":{\"state\":\"STARTED\"}}}" http://$HOST:8080/api/v1/clusters/$CLUSTER_NAME/services/$service
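The PUT call above returns a JSON body containing a request id. If you want to track the progress of the stop/start operation, here is a quick read-back sketch against the standard Ambari requests endpoint ($REQUEST_ID is a placeholder for the id returned by the PUT response):

# curl -u "admin:admin" -H 'X-Requested-By: ambari' "http://$HOST:8080/api/v1/clusters/$CLUSTER_NAME/requests/$REQUEST_ID"

The "request_status" field in the response shows whether the operation is PENDING, IN_PROGRESS or COMPLETED.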
02-24-2020
12:01 PM
@stryjz To turn ON maintenance mode for host "abcd.example.com":

# curl -iLvk -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"Turn On Maintenance Mode for host","query":"Hosts/host_name.in(abcd.example.com)"},"Body":{"Hosts":{"maintenance_state":"ON"}}}' http://ambariserver.example.com:8080/api/v1/clusters/DemoCluster/hosts

To turn OFF maintenance mode for host "abcd.example.com":

# curl -iLvk -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"Turn Off Maintenance Mode for host","query":"Hosts/host_name.in(abcd.example.com)"},"Body":{"Hosts":{"maintenance_state":"OFF"}}}' http://ambariserver.example.com:8080/api/v1/clusters/DemoCluster/hosts

Please change the cluster name "DemoCluster", the hostnames of the Ambari server and the intended host, and the credentials in the above example API calls.
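To confirm the change took effect, you can read the maintenance state back; here is a small sketch using the standard "fields" query parameter of the Ambari API:

# curl -u admin:admin -H "X-Requested-By: ambari" "http://ambariserver.example.com:8080/api/v1/clusters/DemoCluster/hosts/abcd.example.com?fields=Hosts/maintenance_state"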
02-24-2020
11:53 AM
@Prabhu_Muppala As we see, the netstat command shows no Oracle port 1521 opened (no output means Oracle is not running on the default listener port 1521):

[cloudera@quickstart ~]$ sudo netstat -tnlpa | grep 1521

Also, the following error indicates that your Oracle database is not successfully running on "localhost:1521":

Caused by: oracle.net.ns.NetException: The Network Adapter could not establish the connection
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:470)

So please verify why the Oracle DB is not running and configured to use listener port 1521. Please try to restart Oracle and then recheck whether port 1521 is opened, or whether you have configured Oracle to run on some other port:

# netstat -tnlpa | grep $ORACLE_PROCESS_ID
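If Oracle is installed on that host, you can also query the listener directly; a sketch assuming the Oracle environment variables are set for the "oracle" OS user:

# su - oracle
$ lsnrctl status

The output should show the listener endpoints, including the port it is actually bound to.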
02-23-2020
05:13 PM
@pauljoshiva Can you please check what the port for the "RegistryDNS Bind Port" is? Is it free of being used by some other process? I am assuming that port is 53 (please change the port in the following command to verify whether that port is free or being used):

# netstat -tnlpa | grep 53

If possible, can you try changing the port to something else and then see if that works? You can check "RegistryDNS Bind Port" here:

Ambari UI --> Yarn --> Configs --> Advanced (tab) --> Registry

Example: RegistryDNS Bind Port = 1553

Reference threads:
https://community.cloudera.com/t5/Support-Questions/YARN-Registry-DNS-Start-failed-Hortonworks-3/m-p/218794
https://community.cloudera.com/t5/Community-Articles/YARN-REGISTRY-DNS-Port-Conflict-Issue/ta-p/249117
02-23-2020
05:05 PM
@mike_bronson7 As you are running AMS in distributed mode, it will be good to check for any errors appearing in the AMS HBase Master logs first, because if the AMS HMaster process does not run successfully, the AMS collector will definitely go down. So we should check all of these logs for errors:

/var/log/ambari-metrics-collector/hbase-ams-master-*.log
/var/log/ambari-metrics-collector/hbase-ams-region-*.log
/var/log/ambari-metrics-collector/ambari-metrics-collector.log

After freshly restarting the AMS service, what is the first error that you see in the "hbase-ams-master-xxx.log" and in "ambari-metrics-collector.log"?
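A quick way to pull the first errors out of those logs is a simple grep sketch like the one below (adjust the path pattern if your log dir differs):

# grep -iE 'ERROR|FATAL' /var/log/ambari-metrics-collector/hbase-ams-master-*.log | head -20
# grep -iE 'ERROR|FATAL' /var/log/ambari-metrics-collector/ambari-metrics-collector.log | head -20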
02-23-2020
04:25 PM
@Prabhu_Muppala Can you please try to specify the "--driver" param in your Sqoop command as follows:

--driver oracle.jdbc.driver.OracleDriver

Also, can you please verify that the DB credentials are entered correctly and the DB is accessible on localhost:1521?

# netstat -tnlpa | grep 1521

Also, does the user "hr" have enough privileges to list tables in the Oracle DB?
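For example, a sketch of a list-tables call with the --driver param; the connect string below assumes the default SID "XE" on the QuickStart VM, so replace it with your actual SID or service name:

# sqoop list-tables \
    --connect jdbc:oracle:thin:@localhost:1521:XE \
    --driver oracle.jdbc.driver.OracleDriver \
    --username hr -P

The -P option prompts for the password interactively instead of putting it on the command line.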
02-20-2020
08:29 PM
2 Kudos
@mike_bronson7 As we see the error "Table Namespace Manager not fully initialized":

2020-02-21 03:33:49,284 INFO org.apache.hadoop.hbase.client.RpcRetryingCaller: Call exception, tries=15, retries=35, started=629725 ms ago, cancelled=false, msg=java.io.IOException: Table Namespace Manager not fully initialized, try again later
at org.apache.hadoop.hbase.master.HMaster.checkNamespaceManagerReady(HMaster.java:2693)
at org.apache.hadoop.hbase.master.HMaster.ensureNamespaceExists(HMaster.java:2915)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1686)

This indicates the AMS HBase master might have some issue. Can you please let us know when the "distributed mode" AMS was last running fine? Or is it failing to start immediately after enabling AMS distributed mode? Is it a Kerberos-enabled environment?

Can you please check the permissions on the HDFS dir (to verify that the ownership of this HDFS dir is set up correctly as "ams:hdfs")?

# su - hdfs -c 'hdfs dfs -ls /user/ams'
# su - hdfs -c 'hdfs dfs -ls /user/ams/hbase'

If you still face any issue, then you can try to change the ZooKeeper znode for AMS and then restart AMS freshly. To change the "ZooKeeper Znode Parent" property of AMS, please try this:

Ambari UI --> Ambari Metrics --> Configs --> "Advanced ams-hbase-site" --> "ZooKeeper Znode Parent"

Then change the value of the znode to something slightly different, like "/ams-hbase-unsecure" to "/ams-hbase-unsecure1" etc., restart AMS, and let us know if you see any error.
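If the listing shows a different owner, here is a sketch to correct it, assuming "ams:hdfs" is the intended ownership as described above:

# su - hdfs -c 'hdfs dfs -chown -R ams:hdfs /user/ams'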
02-20-2020
08:08 PM
@Ravirakunapu Can you check the "/etc/krb5.conf" file present on the host which is showing the error "kdc host is not reachable on port 88"?

Checks from the client machine: Verify what hostname is specified for the KDC in this file. Suppose the KDC hostname is "kdc.example.com"; then check whether you are able to access that hostname & port from the problematic machine:

# telnet kdc.example.com 88
(OR)
# nc -v kdc.example.com 88

Also please verify that the "/etc/hosts" file maps the correct IP address to the hostname kdc.example.com:

# cat /etc/hosts

On the KDC host: Check on the KDC side whether port 88 is listening and iptables/firewall is disabled:

# netstat -tnlpa | grep $PID_Of_KDC
# service iptables status
# systemctl status firewalld
02-20-2020
07:28 PM
2 Kudos
@mike_bronson7 As the Metrics Service operation mode is already set to "distributed", Ambari will make AMS aware that it needs to find that hbase.rootdir on HDFS. The following should be fine:

hbase.rootdir=/user/ams/hbase
02-20-2020
02:47 PM
2 Kudos
@mike_bronson7 This does not look right. Ideally, with an HDFS HA name we do not use the port number, because "hdfsha" is not a hostname but just a logical name:

hbase.rootdir=hdfs://hdfsha:8020/user/ams/hbase

If your NameService name is "hdfsha" (defined in "Custom core-site" as "dfs.nameservices=hdfsha"), then ideally you should be using the following in your AMS configuration in "Advanced ams-hbase-site":

hbase.rootdir=/user/ams/hbase

As your AMS mode is "distributed", AMS will automatically assume that the data is in HDFS and will be able to figure out the actual NameService name dynamically, so we do not even need to specify "hdfs://hdfsha" there.

After fixing the "hbase.rootdir" in the AMS configs, please kill and restart the AMS processes. Then especially check the following AMS logs and share the full stack trace if you notice any error:

/var/log/ambari-metrics-collector/hbase-ams-master-*.log
/var/log/ambari-metrics-collector/hbase-ams-region-*.log
/var/log/ambari-metrics-collector/ambari-metrics-collector.log
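To double-check the NameService name that HDFS itself reports (handy before editing the AMS config), you can run:

# hdfs getconf -confKey dfs.nameservices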
02-18-2020
10:19 PM
1 Kudo
@mark-gg As the Empty Base URL issue mentioned on the other thread which you referred to ("AMBARI-25069") is already resolved in Ambari 2.7.4 and later, it would be a good idea to first upgrade Ambari to 2.7.4 (or Ambari 2.7.5, which is the latest) and then try to register the desired version.

Ambari Upgrade Guide: https://docs.cloudera.com/HDPDocuments/Ambari-2.7.4.0/bk_ambari-upgrade-major/content/upgrade_ambari.html
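At a high level, the upgrade on a RHEL/CentOS host follows the steps below; this is a rough sketch only, and the exact repo URL and per-OS steps are in the upgrade guide above:

# ambari-server stop
# wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.7.4.0/ambari.repo
# yum clean all
# yum upgrade ambari-server
# ambari-server upgrade
# ambari-server start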
02-17-2020
04:13 AM
@asmarz Regarding your query: "The problem is that it is written that this principal is valid until 17/02/2020. So when I test again I will get the same error when trying to browse HDFS."

>> Please check the following properties defined inside your "/etc/krb5.conf" and try to adjust those values based on your requirement:

# grep -e 'ticket_lifetime\|renew_lifetime' /etc/krb5.conf
renew_lifetime = 7d
ticket_lifetime = 24h

ticket_lifetime: (Time duration string.) Sets the default lifetime for initial ticket requests. The default value is 1 day.
renew_lifetime: (Time duration string.) Sets the default renewable lifetime for initial ticket requests.

For any Kerberos ticket, the 'ticket_lifetime' (usually 1 day) is the time for which that particular ticket is valid. Once the ticket becomes invalid, there is an option (kinit -R) to renew it. The user can keep renewing the ticket this way until the 'renew_lifetime' (usually 7 days) is reached.
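For example, to check the current ticket's expiry and renewable-until times and then renew it before it becomes invalid (a simple sketch; the renewal only works while the ticket is still within its renewable lifetime):

# klist
# kinit -R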
02-17-2020
04:02 AM
@sharathkumar13 You can use a Cloudera Manager API call like the following to start/stop the Kafka service (or any desired service).

To stop the "kafka" service:

# curl -iLv -u admin:admin -X POST --header 'Accept: application/json' 'http://dc-1.example.com:7180/api/v40/clusters/TestCluster/services/kafka/commands/stop'

To start the "kafka" service:

# curl -iLv -u admin:admin -X POST --header 'Accept: application/json' 'http://dc-1.example.com:7180/api/v40/clusters/TestCluster/services/kafka/commands/start'

Please also take a look at the Cloudera Manager Swagger APIs. Cloudera Manager (CM) 6.0 introduced a new Python API client, cm_client, based on Swagger. This new API client supports all CM API versions.
https://cloudera.github.io/cm_api/docs/python-client-swagger/

Please replace the CM credentials, cluster name, and CM host/port in the above API calls.
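To double-check the exact service name to use in the URL (it is the internal service name, not the display name), you can list the cluster's services first, for example:

# curl -u admin:admin 'http://dc-1.example.com:7180/api/v40/clusters/TestCluster/services'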
02-17-2020
03:26 AM
1 Kudo
@hicha Which product and version (HDP/CDH, etc.) are you using, and from where have you downloaded "incubator-livy"?
02-17-2020
03:21 AM
@ARVINDR Can you try to start the NameNode using the command line first and then verify whether it comes up fine? Also please verify whether port 8020 / 50070 is opened by the NameNode successfully. Then try starting the other services via Ambari.

Execute this command on the NameNode host machine(s) to start it manually:

# su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"

Here "hdfs" is the HDFS user ($HDFS_USER). After starting the NameNode using the above command-line option, please check the NameNode logs (in case of any error observed in the log, please share the full stack trace).

Please verify whether the ports are opened by the NameNode properly, whether the firewall is disabled, and whether those ports are accessible from other cluster nodes:

# netstat -tnlpa | grep 50070
# netstat -tnlpa | grep 8020

From the DataNode host, please verify the host/port access (and disable iptables/firewall if running):

# nc -v itxxxxxxxxxxx01.yyyy.com 8020
(OR)
# telnet itxxxxxxxxxxx01.yyyy.com 8020
02-16-2020
10:43 PM
@gcl8775 Yes, Ambari allows more than 2 NameNodes via HDFS Federation. An HDFS federation allows you to scale a cluster horizontally by configuring multiple namespaces and NameNodes. https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/managing-and-monitoring-ambari/content/amb_configure_federation.html
02-16-2020
10:32 PM
@Kureikana Can you try this? Suppose you start your Infra Solr process as the "infra-solr" user; then try the following commands.

Non-Kerberos env:

# su - infra-solr
# source /etc/ambari-infra-solr/conf/infra-solr-env.sh
# /usr/lib/ambari-infra-solr/bin/solr start -cloud -noprompt -s /var/lib/ambari-infra-solr/data 2>&1

Kerberos env:

# su - infra-solr
# kinit -kt /etc/security/keytabs/ambari-infra-solr.service.keytab <AMBARI_INFRA_PRINCIPAL>
# source /etc/ambari-infra-solr/conf/infra-solr-env.sh
# /usr/lib/ambari-infra-solr/bin/solr start -cloud -noprompt -s /var/lib/ambari-infra-solr/data -Dsolr.kerberos.name.rules='DEFAULT' 2>&1
02-15-2020
03:38 PM
1 Kudo
@hicha Not sure which product you are using. However, based on the command it looks like you are running the "livy-server" script from outside of the "bin" directory, which may be causing the issue, because the logic written inside the "livy-server" script is as follows:

export LIVY_HOME=$(cd $(dirname $0)/.. && pwd)
.
.
.
start_livy_server() {
  LIBDIR="$LIVY_HOME/jars"
  if [ ! -d "$LIBDIR" ]; then
    LIBDIR="$LIVY_HOME/server/target/jars"
  fi
  if [ ! -d "$LIBDIR" ]; then
    echo "Could not find Livy jars directory." 1>&2
    exit 1
  fi

So ideally you should first change directory to "bin", where the 'livy-server' script is present, and then run it as follows, OR use the full path to the 'livy-server' script in the terminal:

# cd /PATH/TO/LIVY_DIR/bin
# ./livy-server
(OR)
# /PATH/TO/LIVY_DIR/bin/livy-server

If you still find it difficult to run, then try to put an "echo" statement as follows inside the 'livy-server' script to see what path it resolves for 'LIVY_HOME', and check whether the "$LIVY_HOME/jars" directory exists with correct permissions:

export LIVY_HOME=$(cd $(dirname $0)/.. && pwd)
echo "LIVY_HOME calculated as = $LIVY_HOME"
02-13-2020
02:23 PM
@AarifAkhter We see the error is caused by some DB access/connect issue:

Caused by: java.lang.RuntimeException: Error while creating database accessor
at org.apache.ambari.server.orm.DBAccessorImpl.<init>(DBAccessorImpl.java:120)

So a few queries:
1. Is this a newly set up Ambari server?
2. Do you see that the database is running fine and the DB host/port is accessible?
3. Are you using the embedded Postgres database?
4. When was the last time the Ambari server was running fine? Were any recent changes made to the Ambari server config/host?

I suspect that there is a more detailed log, like an additional "Caused By" section, in your ambari-server.log which was not copied fully in your last update. Can you please recheck your Ambari log, let us know what the first error was, and share the full stack trace? Sometimes there is more than one "Caused By" section in a single stack trace.
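If it is the embedded Postgres setup, here is a quick sketch to verify the DB process and its default port 5432 (adjust the port if you use an external DB):

# ambari-server status
# ps -ef | grep postgres
# netstat -tnlpa | grep 5432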
02-13-2020
02:04 PM
@AarifAkhter As the failure message says: "Please check the logs for more information." So in order to find out the cause of the failure, we will need to look at the Ambari server logs for more detailed messages. Hence please share the following files for initial investigation:

/var/log/ambari-server/ambari-server.log
/var/log/ambari-server/ambari-server.out
02-11-2020
11:39 PM
@TR7_BRYLE As requested earlier: if you still face any issue, can you please share the "ambari-agent.log" freshly after restarting it?
02-11-2020
05:40 PM
@TR7_BRYLE The error is actually due to a timeout (and not because of port access):

SSLError('The read operation timed out',)

The above error indicates that further communication, like reading a response, is timing out. So we will first have to check why the "https" request is timing out. We can use the following kind of simple Python script to simulate what the agent actually tries. The Ambari agent is a Python utility which connects to the Ambari server, registers itself, and sends heartbeat messages to the Ambari server. So we can run the following script from the agent host to see whether it is able to connect or whether it also gets timed out. We are using 'httplib' to test the access and HTTPS communication:

# cat /tmp/SSL/ssl_test.py
import httplib
import ssl

if __name__ == "__main__":
    ca_connection = httplib.HTTPSConnection('kerlatest1.example.com:8440', timeout=5, context=ssl._create_unverified_context())
    ca_connection.request("GET", '/connection_info')
    response = ca_connection.getresponse()
    print response.status
    data = response.read()
    print str(data)

Run it like the following:

# export PYTHONPATH=/usr/lib/ambari-agent/lib:/usr/lib/ambari-agent/lib/ambari_agent:$PYTHONPATH
# python /tmp/SSL/ssl_test.py

If the above works fine, it returns 200 and a result like the following:

# python /tmp/SSL/ssl_test.py
200
{"security.server.two_way_ssl":"false"}

If you notice any HTTPS communication or certificate-related error, then you might want to refer to the following article and, according to your Ambari version, check whether you have the following defined in the "[security]" section of your ambari-agent.ini file:

[security]
force_https_protocol=PROTOCOL_TLSv1_2

If you still face any issue, can you please share the "ambari-agent.log" freshly after restarting it?

Reference article: Java/Python Updates and Ambari Agent TLS Settings
https://community.cloudera.com/t5/Community-Articles/Java-Python-Updates-and-Ambari-Agent-TLS-Settings/ta-p/248328
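To verify the TLS handshake itself against the Ambari server's 8440 port, here is a quick sketch using openssl, forcing TLSv1.2 to match the force_https_protocol setting above:

# openssl s_client -connect kerlatest1.example.com:8440 -tls1_2 < /dev/null

If the handshake fails here as well, the problem is on the TLS layer rather than in the agent itself.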
02-10-2020
03:57 PM
@asmarz In order to enable SSL for the various components, you can refer to the individual component docs. Following are some references:

1). Enabling HTTPS for Grafana & AMS
https://www.youtube.com/watch?v=dSH_9N94c4c
https://docs.cloudera.com/HDPDocuments/Ambari-2.7.3.0/using-ambari-core-services/content/amb_set_up_https_for_grafana.html
https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/using-ambari-core-services/content/amb_set_up_https_for_ams.html

2). Enabling HTTPS for Ambari Server
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/configuring-wire-encryption/content/set_up_ssl_for_ambari.html

3). Enabling HTTPS for HDFS
https://community.cloudera.com/t5/Community-Articles/Enable-HTTPS-for-HDFS/ta-p/247181

4). Enabling HTTPS for various HDP services
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/configuring-wire-encryption/content/enabling_ssl_for_hdp_components.html

If your question is answered, please make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button. For different queries it is better to open a new thread; that way the responses are more organised.
02-10-2020
12:45 PM
@asmarz You might have enabled Kerberos authentication for your cluster components. However, in order to secure the Web UIs offered by these components, you will also need to enable "SPNEGO Authentication". By default, access to the HTTP-based services and UIs for the cluster is not configured to require authentication. Kerberos authentication can be configured for the Web UIs for HDFS, YARN, MapReduce2, HBase, Oozie, Falcon and Storm. Please see [1] & [2].

1. Create a secret key used for signing authentication tokens:

dd if=/dev/urandom of=/etc/security/http_secret bs=1024 count=1
chown hdfs:hadoop /etc/security/http_secret
chmod 440 /etc/security/http_secret

2. Add additional properties for HTTP authentication. Example, in Advanced core-site:

hadoop.http.authentication.simple.anonymous.allowed = false
hadoop.http.authentication.signature.secret.file = /etc/security/http_secret
hadoop.http.authentication.type = kerberos
hadoop.http.authentication.kerberos.keytab = /etc/security/keytabs/spnego.service.keytab
hadoop.http.authentication.kerberos.principal = HTTP/_HOST@EXAMPLE.COM
hadoop.http.filter.initializers = org.apache.hadoop.security.AuthenticationFilterInitializer
hadoop.http.authentication.cookie.domain = hortonworks.local

Once that is done, you will not be able to access those UIs without having a valid Kerberos ticket. You will need to configure your web browser as mentioned in [3] in order to securely access those SPNEGO-enabled component UIs.

Similarly, the following doc explains how to enable HTTP authentication for Ambari [4]:

# ambari-server setup-kerberos
Using python /usr/bin/python
Setting up Kerberos authentication
Enable Kerberos authentication [true|false] (false): true

[1] https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/authentication-with-kerberos/content/authe_spnego_enabling_spnego_authentication_for_hadoop.html
[2] https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/authentication-with-kerberos/content/authe_spnego_configuring_http_authentication_for_hdfs_yarn_mapreduce2_hbase_oozie_falcon_and_storm.html
[3] https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/authentication-with-kerberos/content/authe_spnego_enabling_browser_access_to_a_spnego_enabled_web_ui.html
[4] https://docs.cloudera.com/HDPDocuments/Ambari-2.6.2.0/bk_ambari-security/content/configuring_ambari_server_for_kerberos_authentication.html
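After enabling SPNEGO, you can verify from a shell that the UI now requires a ticket; a sketch assuming WebHDFS on the default NameNode HTTP port 50070 (kinit first with a valid principal, and replace <NAMENODE_HOST> with your host):

# kinit
# curl --negotiate -u : -L 'http://<NAMENODE_HOST>:50070/webhdfs/v1/tmp?op=LISTSTATUS'

Without a valid ticket, the same call should come back with a 401.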
02-05-2020
11:03 PM
@Farid You can find various training offerings like "Private, Blended Learning, Classroom, Virtual Classroom, OnDemand" by Cloudera here:

https://www.cloudera.com/about/training.html
https://www.cloudera.com/about/training/course-listing.html#?course=all

Additionally, for detailed training-related queries please refer to this page:
https://www.cloudera.com/contact-sales.html

Training inquiries email: training-admin@cloudera.com
02-05-2020
11:01 PM
@MortyCodes Just to confirm: you want to pass a Python program file name to the "ExecuteStreamCommand" processor?

ExecuteStreamCommand properties:

Command Arguments: /Jay/NifiDemo/test_python.py
Command Path: /bin/python
Argument Delimiter: ;

I tried the above approach and I can see the script getting executed fine. Is that something like what you are looking for?
02-05-2020
10:33 PM
@gvbnn Those are just warnings; you can ignore them. They should not be causing the job failure. As we also see that the application id "application_1580968178673_0001" was generated, you should be able to check the status of your YARN application in the ResourceManager UI:

http://$RM_ADDRESS:8088/cluster/apps

If your cluster has enough resources, then you should see the progress there as well for your application id.
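You can also check the same from the command line with the standard YARN CLI, for example:

# yarn application -status application_1580968178673_0001

This prints the application state and progress without needing the RM UI.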
02-05-2020
10:24 PM
@gvbnn Those are just WARNING messages. Where do you see the errors? Can you please share more details if you are noticing any error?
02-05-2020
02:20 PM
@S_Waseem I tried to insert the following statement into my Hive table "customer" using your approach, and it worked with a slight modification to your command:

Insert into customer values (5000, "CustFive", "BRN");

So can you please check and compare it with your command? Changes:

- Used the sudo approach which you mentioned, supplying the username and password to beeline using the -n and -p options as shown below.
- The values in quotation marks were changed from "CustFive" to \"CustFive\", as they are surrounded by the -c "statement".

Example output:

[root@newhwx1 ~]# export user_hive=hive
[root@newhwx1 ~]# echo ${user_hive}
hive
[root@newhwx1 ~]# sudo su - ${user_hive} -c "beeline -n hive -p hive -u 'jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2' -e 'insert into customer values (5000, \"CustFive\", \"BRN\")'"
Connecting to jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Connected to: Apache Hive (version 1.2.1000.2.6.5.0-292)
Driver: Hive JDBC (version 1.2.1000.2.6.5.0-292)
Transaction isolation: TRANSACTION_REPEATABLE_READ
INFO : Tez session hasn't been created yet. Opening session
INFO : Dag name: insert into customer values (5000, ..."BRN")(Stage-1)
INFO : Status: Running (Executing on YARN cluster with App id application_1579040432494_27933)
INFO : Loading data to table default.customer from hdfs://My-NN-HA/apps/hive/warehouse/customer/.hive-staging_hive_2020-02-05_22-12-33_107_183476754828623767-2/-ext-10000
--------------------------------------------------------------------------------
VERTICES STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
--------------------------------------------------------------------------------
Map 1 .......... SUCCEEDED 1 1 0 0 0 0
--------------------------------------------------------------------------------
VERTICES: 01/01 [==========================>>] 100% ELAPSED TIME: 6.46 s
--------------------------------------------------------------------------------
INFO : Table default.customer stats: [numFiles=5, numRows=5, totalSize=90, rawDataSize=85]
No rows affected (22.328 seconds)
Beeline version 1.2.1000.2.6.5.0-292 by Apache Hive
Closing: 0: jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2

Output of "select * from customer" afterwards:

[root@newhwx1 ~]# sudo su - ${user_hive} -c "beeline -n hive -p hive -u 'jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2' -e 'select * from customer'"
Connecting to jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Connected to: Apache Hive (version 1.2.1000.2.6.5.0-292)
Driver: Hive JDBC (version 1.2.1000.2.6.5.0-292)
Transaction isolation: TRANSACTION_REPEATABLE_READ
+------------------+--------------------+----------------+--+
| customer.custid | customer.custname | customer.city |
+------------------+--------------------+----------------+--+
| 1000 | CustOne | BLR |
| 2000 | CustTwo | PUNE |
| 3000 | CustThree | HYD |
| 4000 | CustFour | NSW |
| 5000 | CustFive | BRN |
+------------------+--------------------+----------------+--+
5 rows selected (1.108 seconds)
Beeline version 1.2.1000.2.6.5.0-292 by Apache Hive
Closing: 0: jdbc:hive2://newhwx1.example.com:2181,newhwx2.example.com:2181,newhwx3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
02-04-2020
11:25 PM
@Quang_Vu_Blog As per the kafka-connect docs, the default port for "rest.port" is 8083:

rest.port - the port the REST interface listens on for HTTP requests

So are you getting the conflict on port 8003 (or is that a typo; is it 8083)? Can you try changing the "rest.port" in your worker config to something else and then try again? Also, please run the commands below before starting kafka-connect to verify whether there is any port conflict, or whether there are any bind-address issues:

# netstat -tnlpa | grep 8083
# netstat -tnlpa | grep 8003