Archives of Support Questions (Read Only)

This is an archived board for historical reference. Information and links may no longer be available or relevant.

BasicScmProxy: Exception while getting fetch configDefaults hash

Contributor

2016-01-12 13:24:58,093 INFO com.cloudera.cmon.firehose.CMONConfiguration: Config: file:/var/run/cloudera-scm-agent/process/788-cloudera-mgmt-ACTIVITYMONITOR/cmon.conf
2016-01-12 13:25:00,933 INFO com.cloudera.cmf.cdhclient.util.CDHUrlClassLoader: Detected that this program is running in a JAVA 1.7.0_67 JVM. CDH5 jars will be loaded from:lib/cdh5
2016-01-12 13:25:00,935 INFO com.cloudera.enterprise.ssl.SSLFactory: Using default java truststore for verification of server certificates in HTTPS communication.
2016-01-12 13:25:01,435 WARN com.cloudera.cmf.BasicScmProxy: Exception while getting fetch configDefaults hash: none
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
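The `Connection refused` at the bottom of the stack means the Activity Monitor could not open a TCP connection to the Cloudera Manager server: nothing was accepting connections on the target host and port at that moment. A quick way to confirm this from the same machine is a plain socket probe; the host and port below (7180, Cloudera Manager's default web port) are illustrative assumptions, not values taken from the log:

```java
import java.io.IOException;
import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    // Returns true if something is listening at host:port,
    // false on "Connection refused" (the failure seen in the log above).
    static boolean isListening(String host, int port, int timeoutMs) throws IOException {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (ConnectException e) {
            // Same failure mode as BasicScmProxy hit: no process accepting on that port.
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        // Adjust host and port for your installation.
        System.out.println(isListening("localhost", 7180, 2000));
    }
}
```

If the probe returns false while the cloudera-scm-server process claims to be running, the server may still be starting up, bound to a different interface, or blocked by a firewall.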

I am seeing this error after upgrading from CDH 5.3.3 to CDH 5.5.1.

I was able to start the cluster without a problem, but it seems Cloudera Manager is unable to query any of the services to capture service state, logs, etc.

Any ideas?

1 ACCEPTED SOLUTION

Contributor

Not sure what happened. Restarting the entire cluster failed to fix the issue; however, after restarting the individual Cloudera Management Service roles, the problem resolved itself and everything currently looks good.
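For anyone hitting the same symptom, the restart described above can also be triggered through the Cloudera Manager REST API rather than the UI. This is a sketch only: the host, port, credentials, and API version (v11, which corresponds to CM 5.5) are assumptions you will need to adjust for your installation.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class RestartMgmtService {
    // Build the CM API endpoint that restarts the Cloudera Management Service
    // (Activity Monitor, Service Monitor, Host Monitor, etc.).
    static String restartUrl(String host, int port) {
        return "http://" + host + ":" + port + "/api/v11/cm/service/commands/restart";
    }

    public static void main(String[] args) throws IOException {
        // Placeholder host and credentials -- adjust for your install.
        URL url = new URL(restartUrl("cm-host.example.com", 7180));
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        String auth = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
```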

