Cloudera agent not working

New Contributor

CLOUDERA AGENT LOG (on host spark01)

[12/Dec/2018 18:29:58 +0000] 18290 MainThread agent INFO ================================================================================
[12/Dec/2018 18:29:58 +0000] 18290 MainThread agent INFO SCM Agent Version: 5.16.1
[12/Dec/2018 18:29:58 +0000] 18290 MainThread agent INFO Agent Protocol Version: 4
[12/Dec/2018 18:29:58 +0000] 18290 MainThread agent INFO Using Host ID: cfcd2918-fbc2-4a19-9632-47dd41e01d8a
[12/Dec/2018 18:29:58 +0000] 18290 MainThread agent INFO Using directory: /run/cloudera-scm-agent
[12/Dec/2018 18:29:58 +0000] 18290 MainThread agent INFO Using supervisor binary path: /usr/lib/cmf/agent/build/env/bin/supervisord
[12/Dec/2018 18:29:58 +0000] 18290 MainThread agent INFO Neither verify_cert_file nor verify_cert_dir are configured. Not performing validation of server certificates in HTTPS communication. These options can be configured in this agent's config.ini file to enable certificate validation.
[12/Dec/2018 18:29:58 +0000] 18290 MainThread agent INFO Agent Logging Level: INFO
[12/Dec/2018 18:29:58 +0000] 18290 MainThread agent INFO No command line vars
[12/Dec/2018 18:29:58 +0000] 18290 MainThread agent INFO Missing database jar: /usr/share/java/mysql-connector-java.jar (normal, if you're not using this database type)
[12/Dec/2018 18:29:58 +0000] 18290 MainThread agent INFO Missing database jar: /usr/share/java/oracle-connector-java.jar (normal, if you're not using this database type)
[12/Dec/2018 18:29:58 +0000] 18290 MainThread agent INFO Found database jar: /usr/share/cmf/lib/postgresql-42.1.4.jre7.jar
[12/Dec/2018 18:29:58 +0000] 18290 MainThread agent INFO Agent starting as pid 18290 user root(0) group root(0).
[12/Dec/2018 18:30:00 +0000] 18290 MainThread agent INFO Re-using pre-existing directory: /run/cloudera-scm-agent/cgroups
[12/Dec/2018 18:30:00 +0000] 18290 MainThread cgroups INFO Found cgroups subsystem: cpu
[12/Dec/2018 18:30:00 +0000] 18290 MainThread cgroups INFO cgroup pseudofile /tmp/tmpfbPMgV/cpu.rt_runtime_us does not exist, skipping
[12/Dec/2018 18:30:00 +0000] 18290 MainThread cgroups INFO Found cgroups subsystem: cpuacct
[12/Dec/2018 18:30:00 +0000] 18290 MainThread cgroups INFO Found cgroups subsystem: blkio
[12/Dec/2018 18:30:00 +0000] 18290 MainThread cgroups INFO Found cgroups subsystem: memory
[12/Dec/2018 18:30:01 +0000] 18290 MainThread cgroups INFO Reusing /run/cloudera-scm-agent/cgroups/memory
[12/Dec/2018 18:30:01 +0000] 18290 MainThread cgroups INFO Reusing /run/cloudera-scm-agent/cgroups/cpu
[12/Dec/2018 18:30:01 +0000] 18290 MainThread cgroups INFO Reusing /run/cloudera-scm-agent/cgroups/cpuacct
[12/Dec/2018 18:30:01 +0000] 18290 MainThread cgroups INFO Reusing /run/cloudera-scm-agent/cgroups/blkio
[12/Dec/2018 18:30:01 +0000] 18290 MainThread agent INFO Found cgroups capabilities: {'has_memory': True, 'default_memory_limit_in_bytes': 8796093022207, 'default_memory_soft_limit_in_bytes': 8796093022207, 'writable_cgroup_dot_procs': True, 'default_cpu_rt_runtime_us': -1, 'has_cpu': True, 'default_blkio_weight': 1000, 'default_cpu_shares': 1024, 'has_cpuacct': True, 'has_blkio': True}
[12/Dec/2018 18:30:01 +0000] 18290 MainThread agent INFO Setting up supervisord event monitor.
[12/Dec/2018 18:30:01 +0000] 18290 MainThread filesystem_map INFO Monitored nodev filesystem types: ['nfs', 'nfs4', 'tmpfs']
[12/Dec/2018 18:30:01 +0000] 18290 MainThread filesystem_map INFO Using timeout of 2.000000
[12/Dec/2018 18:30:01 +0000] 18290 MainThread filesystem_map INFO Using join timeout of 0.100000
[12/Dec/2018 18:30:01 +0000] 18290 MainThread filesystem_map INFO Using tolerance of 60.000000
[12/Dec/2018 18:30:01 +0000] 18290 MainThread filesystem_map INFO Local filesystem types whitelist: ['ext2', 'ext3', 'ext4', 'xfs']
[12/Dec/2018 18:30:01 +0000] 18290 MainThread kt_renewer INFO Agent wide credential cache set to /run/cloudera-scm-agent/krb5cc_cm_agent_0
[12/Dec/2018 18:30:01 +0000] 18290 MainThread agent INFO Using metrics_url_timeout_seconds of 30.000000
[12/Dec/2018 18:30:01 +0000] 18290 MainThread agent INFO Using task_metrics_timeout_seconds of 5.000000
[12/Dec/2018 18:30:01 +0000] 18290 MainThread agent INFO Using max_collection_wait_seconds of 10.000000
[12/Dec/2018 18:30:01 +0000] 18290 MainThread metrics INFO Importing tasktracker metric schema from file /usr/lib/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.16.1-py2.7.egg/cmf/monitor/tasktracker/schema.json
[12/Dec/2018 18:30:01 +0000] 18290 MainThread ntp_monitor INFO Using timeout of 2.000000
[12/Dec/2018 18:30:01 +0000] 18290 MainThread dns_names INFO Using timeout of 30.000000
[12/Dec/2018 18:30:01 +0000] 18290 MainThread __init__ INFO Created DNS monitor.
[12/Dec/2018 18:30:01 +0000] 18290 MainThread stacks_collection_manager INFO Using max_uncompressed_file_size_bytes: 5242880
[12/Dec/2018 18:30:01 +0000] 18290 MainThread __init__ INFO Importing metric schema from file /usr/lib/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.16.1-py2.7.egg/cmf/monitor/schema.json
[12/Dec/2018 18:30:01 +0000] 18290 MainThread agent INFO Supervised processes will add the following to their environment (in addition to the supervisor's env): {'CDH_PARQUET_HOME': '/usr/lib/parquet', 'JSVC_HOME': '/usr/libexec/bigtop-utils', 'CMF_PACKAGE_DIR': '/usr/lib/cmf/service', 'CDH_HADOOP_BIN': '/usr/bin/hadoop', 'MGMT_HOME': '/usr/share/cmf', 'CDH_IMPALA_HOME': '/usr/lib/impala', 'CDH_YARN_HOME': '/usr/lib/hadoop-yarn', 'CDH_HDFS_HOME': '/usr/lib/hadoop-hdfs', 'PATH': '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games', 'CDH_HUE_PLUGINS_HOME': '/usr/lib/hadoop', 'CM_STATUS_CODES': u'STATUS_NONE HDFS_DFS_DIR_NOT_EMPTY HBASE_TABLE_DISABLED HBASE_TABLE_ENABLED JOBTRACKER_IN_STANDBY_MODE YARN_RM_IN_STANDBY_MODE', 'KEYTRUSTEE_KP_HOME': '/usr/share/keytrustee-keyprovider', 'CDH_KUDU_HOME': '/usr/lib/kudu', 'CLOUDERA_ORACLE_CONNECTOR_JAR': '/usr/share/java/oracle-connector-java.jar', 'CDH_SQOOP2_HOME': '/usr/lib/sqoop2', 'KEYTRUSTEE_SERVER_HOME': '/usr/lib/keytrustee-server', 'CDH_MR2_HOME': '/usr/lib/hadoop-mapreduce', 'HIVE_DEFAULT_XML': '/etc/hive/conf.dist/hive-default.xml', 'CLOUDERA_POSTGRESQL_JDBC_JAR': '/usr/share/cmf/lib/postgresql-42.1.4.jre7.jar', 'CDH_KMS_HOME': '/usr/lib/hadoop-kms', 'CDH_HBASE_HOME': '/usr/lib/hbase', 'CDH_SQOOP_HOME': '/usr/lib/sqoop', 'WEBHCAT_DEFAULT_XML': '/etc/hive-webhcat/conf.dist/webhcat-default.xml', 'CDH_OOZIE_HOME': '/usr/lib/oozie', 'CDH_ZOOKEEPER_HOME': '/usr/lib/zookeeper', 'CDH_HUE_HOME': '/usr/lib/hue', 'CLOUDERA_MYSQL_CONNECTOR_JAR': '/usr/share/java/mysql-connector-java.jar', 'CDH_HBASE_INDEXER_HOME': '/usr/lib/hbase-solr', 'CDH_MR1_HOME': '/usr/lib/hadoop-0.20-mapreduce', 'CDH_SOLR_HOME': '/usr/lib/solr', 'CDH_PIG_HOME': '/usr/lib/pig', 'CDH_SENTRY_HOME': '/usr/lib/sentry', 'CDH_CRUNCH_HOME': '/usr/lib/crunch', 'CDH_LLAMA_HOME': '/usr/lib/llama/', 'CDH_HTTPFS_HOME': '/usr/lib/hadoop-httpfs', 'CDH_HADOOP_HOME': '/usr/lib/hadoop', 'CDH_HIVE_HOME': '/usr/lib/hive', 'ORACLE_HOME': '/usr/share/oracle/instantclient', 'CDH_HCAT_HOME': '/usr/lib/hive-hcatalog', 'CDH_KAFKA_HOME': '/usr/lib/kafka', 'CDH_SPARK_HOME': '/usr/lib/spark', 'TOMCAT_HOME': '/usr/lib/bigtop-tomcat', 'CDH_FLUME_HOME': '/usr/lib/flume-ng'}
[12/Dec/2018 18:30:01 +0000] 18290 MainThread agent INFO To override these variables, use /etc/cloudera-scm-agent/config.ini. Environment variables for CDH locations are not used when CDH is installed from parcels.
[12/Dec/2018 18:30:01 +0000] 18290 MainThread agent INFO Re-using pre-existing directory: /run/cloudera-scm-agent/process
[12/Dec/2018 18:30:01 +0000] 18290 MainThread agent INFO Re-using pre-existing directory: /run/cloudera-scm-agent/supervisor
[12/Dec/2018 18:30:01 +0000] 18290 MainThread agent INFO Re-using pre-existing directory: /run/cloudera-scm-agent/flood
[12/Dec/2018 18:30:01 +0000] 18290 MainThread agent INFO Re-using pre-existing directory: /run/cloudera-scm-agent/supervisor/include
[12/Dec/2018 18:30:01 +0000] 18290 MainThread agent INFO Supervisor version: 3.0, pid: 16060
[12/Dec/2018 18:30:01 +0000] 18290 MainThread agent INFO Connecting to previous supervisor: agent-16019-1544598727.
[12/Dec/2018 18:30:01 +0000] 18290 MainThread status_server INFO Using maximum impala profile bundle size of 1073741824 bytes.
[12/Dec/2018 18:30:01 +0000] 18290 MainThread status_server INFO Using maximum stacks log bundle size of 1073741824 bytes.
[12/Dec/2018 18:30:01 +0000] 18290 MainThread _cplogging INFO [12/Dec/2018:18:30:01] ENGINE Bus STARTING
[12/Dec/2018 18:30:01 +0000] 18290 MainThread _cplogging INFO [12/Dec/2018:18:30:01] ENGINE Started monitor thread '_TimeoutMonitor'.
[12/Dec/2018 18:30:01 +0000] 18290 HTTPServer Thread-2 _cplogging ERROR [12/Dec/2018:18:30:01] ENGINE Error in HTTP server: shutting down
Traceback (most recent call last):
File "/usr/lib/cmf/agent/build/env/lib/python2.7/site-packages/CherryPy-3.2.2-py2.7.egg/cherrypy/process/servers.py", line 187, in _start_http_thread
self.httpserver.start()
File "/usr/lib/cmf/agent/build/env/lib/python2.7/site-packages/CherryPy-3.2.2-py2.7.egg/cherrypy/wsgiserver/wsgiserver2.py", line 1825, in start
raise socket.error(msg)
error: No socket could be created on ('spark01', 9000) -- [Errno 99] Cannot assign requested address

[12/Dec/2018 18:30:01 +0000] 18290 HTTPServer Thread-2 _cplogging INFO [12/Dec/2018:18:30:01] ENGINE Bus STOPPING
[12/Dec/2018 18:30:01 +0000] 18290 HTTPServer Thread-2 _cplogging INFO [12/Dec/2018:18:30:01] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('spark01', 9000)) already shut down
[12/Dec/2018 18:30:01 +0000] 18290 HTTPServer Thread-2 _cplogging INFO [12/Dec/2018:18:30:01] ENGINE Stopped thread '_TimeoutMonitor'.
[12/Dec/2018 18:30:01 +0000] 18290 HTTPServer Thread-2 _cplogging INFO [12/Dec/2018:18:30:01] ENGINE Bus STOPPED
[12/Dec/2018 18:30:01 +0000] 18290 HTTPServer Thread-2 _cplogging INFO [12/Dec/2018:18:30:01] ENGINE Bus EXITING
[12/Dec/2018 18:30:01 +0000] 18290 HTTPServer Thread-2 _cplogging INFO [12/Dec/2018:18:30:01] ENGINE Bus EXITED
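
For reference, errno 99 (EADDRNOTAVAIL) means the process tried to bind a socket to an address that is not assigned to any interface on the host, as opposed to errno 98 (address already in use). A minimal Python sketch (same runtime as the agent, hostname and port taken from the traceback above) that shows the two failure modes:

import errno
import socket

def try_bind(host, port):
    # Attempt the same kind of bind the agent's CherryPy status server performs.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        print("bound ok on %s:%s" % (host, port))
    except socket.error as e:
        if e.errno == errno.EADDRNOTAVAIL:   # errno 99, as in the traceback above
            print("%s resolves to an address that is not on this host" % host)
        elif e.errno == errno.EADDRINUSE:    # errno 98, something else already listening
            print("port %s is already in use on %s" % (port, host))
        else:
            raise
    finally:
        s.close()

try_bind('spark01', 9000)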

=========================================

@spark02:/var/log/cloudera-scm-agent# cat /etc/hosts
127.0.0.1 localhost loopback
127.0.1.1 spark02.local spark02


# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

192.168.1.180 spark01.local spark01
192.168.1.181 spark02.local spark02
192.168.1.182 spark03.local spark03
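
Since the agent appears to bind to whatever its configured listening_hostname resolves to (the traceback above tried ('spark01', 9000)), it may be worth confirming what these /etc/hosts entries make spark01 and spark02 resolve to on this host. A quick check along the lines of the snippet quoted in config.ini; hostnames are the ones from the file above:

import socket

# hypothetical sanity check, not part of the agent
for name in ('spark01', 'spark02', socket.getfqdn()):
    print("%s -> %s" % (name, socket.gethostbyname(name)))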

======================================

spark02:/var/log/cloudera-scm-agent# cat /etc/cloudera-scm-agent/config.ini
[General]
# Hostname of the CM server.
server_host=192.168.1.180

# Port that the CM server is listening on.
server_port=7182

## It should not normally be necessary to modify these.
# Port that the CM agent should listen on.
listening_port=9000

# IP Address that the CM agent should listen on.
listening_ip=192.168.1.181

# Hostname that the CM agent reports as its hostname. If unset, will be
# obtained in code through something like this:
#
# python -c 'import socket; \
# print socket.getfqdn(), \
# socket.gethostbyname(socket.getfqdn())'
#
listening_hostname=spark01

# An alternate hostname to report as the hostname for this host in CM.
# Useful when this agent is behind a load balancer or proxy and all
# inbound communication must connect through that proxy.
# reported_hostname=

# Port that supervisord should listen on.
# NB: This only takes effect if supervisord is restarted.
# supervisord_port=19001

# Log file. The supervisord log file will be placed into
# the same directory. Note that if the agent is being started via the
# init.d script, /var/log/cloudera-scm-agent/cloudera-scm-agent.out will
# also have a small amount of output (from before logging is initialized).
# log_file=/var/log/cloudera-scm-agent/cloudera-scm-agent.log

# Persistent state directory. Directory to store CM agent state that
# persists across instances of the agent process and system reboots.
# Particularly, the agent's UUID is stored here.
# lib_dir=/var/lib/cloudera-scm-agent

# Parcel directory. Unpacked parcels will be stored in this directory.
# Downloaded parcels will be stored in <parcel_dir>/../parcel-cache
# parcel_dir=/opt/cloudera/parcels

# Enable supervisord event monitoring. Used in eager heartbeating, amongst
# other things.
# enable_supervisord_events=true

# Maximum time to wait (in seconds) for all metric collectors to finish
# collecting data.
max_collection_wait_seconds=10.0

# Maximum time to wait (in seconds) when connecting to a local role's
# webserver to fetch metrics.
metrics_url_timeout_seconds=30.0

# Maximum time to wait (in seconds) when connecting to a local TaskTracker
# to fetch task attempt data.
task_metrics_timeout_seconds=5.0

# The list of non-device (nodev) filesystem types which will be monitored.
monitored_nodev_filesystem_types=nfs,nfs4,tmpfs

# The list of filesystem types which are considered local for monitoring purposes.
# These filesystems are combined with the other local filesystem types found in
# /proc/filesystems
local_filesystem_whitelist=ext2,ext3,ext4,xfs

# The largest size impala profile log bundle that this agent will serve to the
# CM server. If the CM server requests more than this amount, the bundle will
# be limited to this size. All instances of this limit being hit are logged to
# the agent log.
impala_profile_bundle_max_bytes=1073741824

# The largest size stacks log bundle that this agent will serve to the CM
# server. If the CM server requests more than this amount, the bundle will be
# limited to this size. All instances of this limit being hit are logged to the
# agent log.
stacks_log_bundle_max_bytes=1073741824

# The size to which the uncompressed portion of a stacks log can grow before it
# is rotated. The log will then be compressed during rotation.
stacks_log_max_uncompressed_file_size_bytes=5242880

# The orphan process directory staleness threshold. If a directory is more stale
# than this amount of seconds, CM agent will remove it.
orphan_process_dir_staleness_threshold=5184000

# The orphan process directory refresh interval. The CM agent will check the
# staleness of the orphan processes config directory every this amount of
# seconds.
orphan_process_dir_refresh_interval=3600

# A knob to control the agent logging level. The options are listed as follows:
# 1) DEBUG (set the agent logging level to 'logging.DEBUG')
# 2) INFO (set the agent logging level to 'logging.INFO')
scm_debug=INFO

# The DNS resolution collection interval in seconds. A Java-based test program
# will be executed with at most this frequency to collect java DNS resolution
# metrics. The test program is only executed if the associated health test,
# Host DNS Resolution, is enabled.
dns_resolution_collection_interval_seconds=60

# The maximum time to wait (in seconds) for the java test program to collect
# java DNS resolution metrics.
dns_resolution_collection_timeout_seconds=30

# The directory location in which the agent-wide kerberos credential cache
# will be created.
# agent_wide_credential_cache_location=/var/run/cloudera-scm-agent

[Security]
# Use TLS and certificate validation when connecting to the CM server.
use_tls=0

# The maximum allowed depth of the certificate chain returned by the peer.
# The default value of 9 matches the default specified in openssl's
# SSL_CTX_set_verify.
max_cert_depth=9

# A file of CA certificates in PEM format. The file can contain several CA
# certificates identified by
#
# -----BEGIN CERTIFICATE-----
# ... (CA certificate in base64 encoding) ...
# -----END CERTIFICATE-----
#
# sequences. Before, between, and after the certificates text is allowed which
# can be used e.g. for descriptions of the certificates.
#
# The file is loaded once, the first time an HTTPS connection is attempted. A
# restart of the agent is required to pick up changes to the file.
#
# Note that if neither verify_cert_file or verify_cert_dir is set, certificate
# verification will not be performed.
# verify_cert_file=

# Directory containing CA certificates in PEM format. The files each contain one
# CA certificate. The files are looked up by the CA subject name hash value,
# which must hence be available. If more than one CA certificate with the same
# name hash value exist, the extension must be different (e.g. 9d66eef0.0,
# 9d66eef0.1 etc). The search is performed in the ordering of the extension
# number, regardless of other properties of the certificates. Use the c_rehash
# utility to create the necessary links.
#
# The certificates in the directory are only looked up when required, e.g. when
# building the certificate chain or when actually performing the verification
# of a peer certificate. The contents of the directory can thus be changed
# without an agent restart.
#
# When looking up CA certificates, the verify_cert_file is first searched, then
# those in the directory. Certificate matching is done based on the subject name,
# the key identifier (if present), and the serial number as taken from the
# certificate to be verified. If these data do not match, the next certificate
# will be tried. If a first certificate matching the parameters is found, the
# verification process will be performed; no other certificates for the same
# parameters will be searched in case of failure.
#
# Note that if neither verify_cert_file or verify_cert_dir is set, certificate
# verification will not be performed.
# verify_cert_dir=

# PEM file containing client private key.
# client_key_file=

# A command to run which returns the client private key password on stdout
# client_keypw_cmd=

# If client_keypw_cmd isn't specified, instead a text file containing
# the client private key password can be used.
# client_keypw_file=

# PEM file containing client certificate.
# client_cert_file=

## Location of Hadoop files. These are the CDH locations when installed by
## packages. Unused when CDH is installed by parcels.
[Hadoop]
#cdh_crunch_home=/usr/lib/crunch
#cdh_flume_home=/usr/lib/flume-ng
#cdh_hadoop_bin=/usr/bin/hadoop
#cdh_hadoop_home=/usr/lib/hadoop
#cdh_hbase_home=/usr/lib/hbase
#cdh_hbase_indexer_home=/usr/lib/hbase-solr
#cdh_hcat_home=/usr/lib/hive-hcatalog
#cdh_hdfs_home=/usr/lib/hadoop-hdfs
#cdh_hive_home=/usr/lib/hive
#cdh_httpfs_home=/usr/lib/hadoop-httpfs
#cdh_hue_home=/usr/share/hue
#cdh_hue_plugins_home=/usr/lib/hadoop
#cdh_impala_home=/usr/lib/impala
#cdh_llama_home=/usr/lib/llama
#cdh_mr1_home=/usr/lib/hadoop-0.20-mapreduce
#cdh_mr2_home=/usr/lib/hadoop-mapreduce
#cdh_oozie_home=/usr/lib/oozie
#cdh_parquet_home=/usr/lib/parquet
#cdh_pig_home=/usr/lib/pig
#cdh_solr_home=/usr/lib/solr
#cdh_spark_home=/usr/lib/spark
#cdh_sqoop_home=/usr/lib/sqoop
#cdh_sqoop2_home=/usr/lib/sqoop2
#cdh_yarn_home=/usr/lib/hadoop-yarn
#cdh_zookeeper_home=/usr/lib/zookeeper
#hive_default_xml=/etc/hive/conf.dist/hive-default.xml
#webhcat_default_xml=/etc/hive-webhcat/conf.dist/webhcat-default.xml
#jsvc_home=/usr/libexec/bigtop-utils
#tomcat_home=/usr/lib/bigtop-tomcat
#oracle_home=/usr/share/oracle/instantclient

## Location of Cloudera Management Services files.
[Cloudera]
#mgmt_home=/usr/share/cmf

## Location of JDBC Drivers.
[JDBC]
#cloudera_mysql_connector_jar=/usr/share/java/mysql-connector-java.jar
#cloudera_oracle_connector_jar=/usr/share/java/oracle-connector-java.jar
#By default, postgres jar is found dynamically in $MGMT_HOME/lib
#cloudera_postgresql_jdbc_jar=

 

 

 

 

CLOUDERA MANAGER SERVER (spark01)

 

spark01@spark01:~$ sudo netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:7180 0.0.0.0:* LISTEN 14656/java
tcp 0 0 0.0.0.0:7182 0.0.0.0:* LISTEN 14656/java
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 13857/rpcbind
tcp 0 0 127.0.1.1:53 0.0.0.0:* LISTEN 741/dnsmasq
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 8762/sshd
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 3032/cupsd
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 5258/postgres
tcp 0 0 127.0.0.1:19001 0.0.0.0:* LISTEN 14098/python
tcp 0 0 127.0.1.1:9000 0.0.0.0:* LISTEN 17059/python2.7
tcp 0 0 0.0.0.0:7432 0.0.0.0:* LISTEN 14612/postgres
tcp6 0 0 :::111 :::* LISTEN 13857/rpcbind
tcp6 0 0 :::80 :::* LISTEN 13708/apache2
tcp6 0 0 :::22 :::* LISTEN 8762/sshd
tcp6 0 0 ::1:631 :::* LISTEN 3032/cupsd
tcp6 0 0 :::7432 :::* LISTEN 14612/postgres
udp 0 0 0.0.0.0:52480 0.0.0.0:* 2042/firefox
udp 0 0 0.0.0.0:36212 0.0.0.0:* 735/dhclient
udp 0 0 0.0.0.0:52909 0.0.0.0:* 699/avahi-daemon: r
udp 0 0 0.0.0.0:5353 0.0.0.0:* 699/avahi-daemon: r
udp 0 0 127.0.1.1:53 0.0.0.0:* 741/dnsmasq
udp 0 0 0.0.0.0:68 0.0.0.0:* 735/dhclient
udp 0 0 0.0.0.0:111 0.0.0.0:* 13857/rpcbind
udp 0 0 0.0.0.0:631 0.0.0.0:* 1070/cups-browsed
udp 0 0 0.0.0.0:888 0.0.0.0:* 13857/rpcbind
udp6 0 0 :::42428 :::* 2042/firefox
udp6 0 0 :::44736 :::* 699/avahi-daemon: r
udp6 0 0 :::5353 :::* 699/avahi-daemon: r
udp6 0 0 :::47567 :::* 735/dhclient
udp6 0 0 :::111 :::* 13857/rpcbind
udp6 0 0 :::888 :::* 13857/rpcbind

 

 

==============

spark01:~$ cat /etc/hosts
127.0.0.1 localhost loopback
127.0.1.1 spark01.local spark01


# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

192.168.1.180 spark01.local spark01
192.168.1.181 spark02.local spark02
192.168.1.182 spark03.local spark03

 

 

 

=====================

spark01:~$ cat /etc/cloudera-scm-agent/config.ini
[General]
# Hostname of the CM server.
server_host=192.168.1.180

# Port that the CM server is listening on.
server_port=7182

## It should not normally be necessary to modify these.
# Port that the CM agent should listen on.
listening_port=9000

# IP Address that the CM agent should listen on.
listening_ip=

# Hostname that the CM agent reports as its hostname.
listening_hostname=spark01

# (remaining default comments are identical to the spark02 copy of config.ini above)
max_collection_wait_seconds=10.0
metrics_url_timeout_seconds=30.0
task_metrics_timeout_seconds=5.0
monitored_nodev_filesystem_types=nfs,nfs4,tmpfs
local_filesystem_whitelist=ext2,ext3,ext4,xfs
impala_profile_bundle_max_bytes=1073741824
stacks_log_bundle_max_bytes=1073741824
stacks_log_max_uncompressed_file_size_bytes=5242880
orphan_process_dir_staleness_threshold=5184000
orphan_process_dir_refresh_interval=3600
scm_debug=INFO
dns_resolution_collection_interval_seconds=60
dns_resolution_collection_timeout_seconds=30
# agent_wide_credential_cache_location=/var/run/cloudera-scm-agent

[Security]
use_tls=0
max_cert_depth=9

[Hadoop]
# (commented-out CDH home locations, same defaults as in the spark02 copy above)

[Cloudera]
#mgmt_home=/usr/share/cmf

[JDBC]
#cloudera_mysql_connector_jar=/usr/share/java/mysql-connector-java.jar
#cloudera_oracle_connector_jar=/usr/share/java/oracle-connector-java.jar
#By default, postgres jar is found dynamically in $MGMT_HOME/lib
#cloudera_postgresql_jdbc_jar=

 

3 REPLIES

New Contributor

I am trying to solve this, but I am unable to get any lead.

Master Collaborator
It looks like another instance of the Cloudera agent is running and keeping port 9000 occupied. At least, that is what I understand from your netstat output.
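
For what it's worth, here is a small standard-library sketch (a hypothetical helper, not part of the agent) that reads /proc/net/tcp and reports any LISTEN socket on port 9000; it shows the same thing as the netstat output above, and running netstat -tunlp or ss -tlnp as root maps the socket back to the PID:

import socket
import struct

PORT = 9000
with open('/proc/net/tcp') as f:
    next(f)  # skip the header row
    for line in f:
        fields = line.split()
        addr_hex, port_hex = fields[1].split(':')             # local_address column
        if int(port_hex, 16) == PORT and fields[3] == '0A':   # state 0A = LISTEN
            # /proc/net/tcp stores IPv4 addresses in host (little-endian) order on x86
            ip = socket.inet_ntoa(struct.pack('<I', int(addr_hex, 16)))
            print('port %d is already bound on %s' % (PORT, ip))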

Expert Contributor

Hi @abhay0129

 

You need to kill the process that is listening on port 9000 or change the port of the service.
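
If you go the change-the-port route, a quick way to double-check what the agent will actually pick up from config.ini on each host (a minimal sketch; the path and key names are the ones shown in the dumps above):

try:
    import configparser                    # Python 3
except ImportError:
    import ConfigParser as configparser    # Python 2, the agent's runtime

cp = configparser.RawConfigParser()
cp.read('/etc/cloudera-scm-agent/config.ini')
for key in ('server_host', 'listening_port', 'listening_ip', 'listening_hostname'):
    value = cp.get('General', key) if cp.has_option('General', key) else '(not set)'
    print('%s=%s' % (key, value))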

 

 

Regards,

Manu.
