Member since: 01-09-2019
Posts: 29
Kudos Received: 3
Solutions: 3
My Accepted Solutions
Title | Views | Posted
--- | --- | ---
 | 671 | 06-03-2020 12:34 AM
 | 518 | 06-03-2020 12:24 AM
 | 391 | 05-28-2020 02:37 AM
01-07-2021
07:40 AM
Dear @janvanbesien,
Were you able to migrate to a supported version of JDK 11, please?
Kind regards,
Julius
01-05-2021
05:36 AM
Hello Boenu,
Were you able to resolve this issue, please? We upgraded our NiFi deployment today and are facing the same issue (tested in Firefox and Chrome).
Kind regards,
Julius
EDIT: found the related JIRA issues; it seems there is no fix for it, just a workaround 😞
https://issues.apache.org/jira/browse/NIFI-7849
https://issues.apache.org/jira/browse/NIFI-7870
11-26-2020
02:49 AM
Hi SherKhan,
Based on the screenshot, you have two conditions configured for the warning trigger. The first one fires if the free space falls below 5GB; the second fires once the disk utilization exceeds 50%. The disk belonging to master.ndctech.net has 55.53% of its capacity used, and so the warning is triggered. If you consider this a false alarm, consider raising the warning threshold from 50% to, say, 70%.
Kind regards,
Gyuszi
09-30-2020
03:51 AM
@rblough Thank you for the continued support.
1) The detailed output showed that there are 603,723 blocks in total. Looking at the HDFS UI, the DataNodes report having 586,426 blocks each.
2) The command is being run as the hdfs user.
3) hdfs fsck / -openforwrite says that there are 506,549 blocks in total.
The discrepancy in block count still seems to be there. Below are the summaries of the different fsck outputs.

hdfs fsck / -files -blocks -locations -includeSnapshots

Status: HEALTHY
Number of data-nodes: 3
Number of racks: 1
Total dirs: 64389
Total symlinks: 0

Replicated Blocks:
Total size: 330079817503 B (Total open files size: 235302 B)
Total files: 625308 (Files currently being written: 129)
Total blocks (validated): 603723 (avg. block size 546740 B) (Total open file blocks (not validated): 122)
Minimally replicated blocks: 603723 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 3.0
Missing blocks: 0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Blocks queued for replication: 0

Erasure Coded Block Groups:
Total size: 0 B
Total files: 0
Total block groups (validated): 0
Minimally erasure-coded block groups: 0
Over-erasure-coded block groups: 0
Under-erasure-coded block groups: 0
Unsatisfactory placement block groups: 0
Average block group size: 0.0
Missing block groups: 0
Corrupt block groups: 0
Missing internal blocks: 0
Blocks queued for replication: 0

FSCK ended at Wed Sep 30 12:23:06 CEST 2020 in 23305 milliseconds

hdfs fsck / -openforwrite

Status: HEALTHY
Number of data-nodes: 3
Number of racks: 1
Total dirs: 63922
Total symlinks: 0

Replicated Blocks:
Total size: 329765860325 B
Total files: 528144
Total blocks (validated): 506549 (avg. block size 651004 B)
Minimally replicated blocks: 506427 (99.975914 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 2.9992774
Missing blocks: 0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Blocks queued for replication: 0

Erasure Coded Block Groups:
Total size: 0 B
Total files: 0
Total block groups (validated): 0
Minimally erasure-coded block groups: 0
Over-erasure-coded block groups: 0
Under-erasure-coded block groups: 0
Unsatisfactory placement block groups: 0
Average block group size: 0.0
Missing block groups: 0
Corrupt block groups: 0
Missing internal blocks: 0
Blocks queued for replication: 0

FSCK ended at Wed Sep 30 12:28:06 CEST 2020 in 11227 milliseconds
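For what it's worth, the gap between the two summaries (603,723 vs 506,549 blocks) is consistent with the -includeSnapshots run also counting snapshot-referenced blocks. A hedged way to narrow this down with the standard HDFS CLI (output labels may vary by version):

sudo -u hdfs hdfs lsSnapshottableDir                              # list directories with snapshots enabled
sudo -u hdfs hdfs fsck / -includeSnapshots | grep 'Total blocks'  # count including snapshot-only blocks
sudo -u hdfs hdfs fsck / | grep 'Total blocks'                    # same run without snapshot blocks, for comparison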
09-22-2020
11:23 PM
Hi @Madhur ,
Thank you for your prompt response. Unfortunately I do not have a subscription with Cloudera, so I am unable to access the Knowledge Base. So far we have managed to get by with the free version of CDH 6.3 and the help of the community on and off these forums 🙂
Kind regards,
Gyuszi Kovacs
09-22-2020
11:17 PM
@rblough - thank you very much for your prompt reply.
1) I got the block numbers from the HDFS status page in Cloudera Manager. Based on your question I checked the numbers on the NameNode UI. For the DEV environment all three DataNodes are online, showing 1,613,019 blocks each (CM shows 1,613,104). For the PROD environment the NameNode UI shows 477,464 blocks on each of the three DataNodes.
2) Yes, all of the DataNodes are showing as online; dfsadmin -report confirms this, as do the NameNode UI and CM. Incidentally, the report did not include the total block count, just the number of missing or under-replicated blocks - everything sits at zero.
3) The replication factor is set to 3 in both environments.
Kind regards,
Gyuszi Kovacs
09-22-2020
07:01 AM
Greetings,
I would like to clear up my understanding of how the block count is measured for the cluster. A bit of background: we started receiving high block count warnings in Cloudera Manager (6.3), which led to some investigating and cleanup. Currently I am trying to lower the block count in our DEV environment, but I am a bit confused.
In Cloudera Manager, when I navigate to the HDFS service and look at the Health Tests, I see that there are "...1,608,301 total blocks in the cluster." However, when I run:
sudo -u hdfs hdfs fsck / -files -blocks -locations
the summary at the end states:
Replicated Blocks:
Total size: 53194573887 B (Total open files size: 1557592 B)
Total files: 569244 (Files currently being written: 202)
Total blocks (validated): 553524 (avg. block size 96101 B) (Total open file blocks (not validated): 193)
Minimally replicated blocks: 553524 (100.0 %)
This is fine - from what I understand, the fsck command shows the block count without the replication factor of 3, so the numbers add up roughly. However, when I perform the same comparison in our PROD environment, I get the following. In Cloudera Manager I see that there are "... 449,966 total blocks in the cluster.", yet the fsck command returns:
Replicated Blocks:
Total size: 317645298827 B (Total open files size: 2529389 B)
Total files: 389375 (Files currently being written: 142)
Total blocks (validated): 368223 (avg. block size 862643 B) (Total open file blocks (not validated): 130)
Minimally replicated blocks: 368223 (100.0 %)
Could someone please explain the discrepancy between the numbers in this case?
Thank you, kind regards,
Gyuszi
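For cross-checking, CM's figure most likely comes from the NameNode's BlocksTotal metric, which can be queried directly over the NameNode JMX endpoint. A hedged sketch - hostname is a placeholder, and 9870 is the Hadoop 3 default NameNode HTTP port (9871 for HTTPS):

curl -s 'http://namenode.example.com:9870/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem' | grep -i blockstotal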
Labels:
- Cloudera Manager
- HDFS
08-11-2020
08:31 AM
1 Kudo
More than a year later and there is still no open source version available, from what I can tell. Am I missing something? :S @Cloudera - I know that Covid shook the world, but an updated roadmap would be appreciated. Cheers!
06-03-2020
12:34 AM
Hi @Maria_pl ,
Does the send-mail processor produce any errors when the message fails to send? Also, please check the NiFi logs - nifi-app.log is the one most likely to hint at what is happening in the background, but while you are there you may want to check the rest of the NiFi logs as well. I see that you are using the latest version of NiFi. We are currently on version 1.10.0 and PutEmail works fine.
Kind regards,
Julius
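A minimal way to watch for PutEmail failures while reproducing the problem - assuming the default log location, which varies by install:

tail -f /var/log/nifi/nifi-app.log | grep -iE 'putemail|smtp|mail'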
06-03-2020
12:24 AM
Hello,
This topic is described at length in the official documentation: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/cm_ig_ports.html
Kind regards,
Julius
06-02-2020
04:35 AM
Hello there,
What happens when you try a manual kinit on host2 while explicitly specifying the principal and the realm, e.g.:
kinit -kt keytab.file -p user/HDY@HDY
Also, could you share your krb5.conf files from both hosts, as well as the list of keys stored in the keytab you exported (klist -kt keytab.file)? Without this information it's a bit hard to analyze this issue.
Kind regards,
Julius
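For extra detail, MIT Kerberos can trace the exchange step by step; a hedged sketch reusing the filenames from above:

KRB5_TRACE=/dev/stdout kinit -kt keytab.file user/HDY@HDY   # print verbose per-step Kerberos diagnostics
klist -kt keytab.file                                       # show key versions (kvno) and principals in the keytab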
06-02-2020
03:22 AM
1 Kudo
Hi @palaaniappan ,
Based on your screenshot, I suspect there are invisible characters (newline, tab, space, etc.) in the name of your folder - note how a new line appears after your IoT folders but not after the other folders. Please check your NiFi processors and make sure the file paths for creating the folders are correct. Sometimes when copy-pasting between different windows you can inadvertently paste invisible characters as well.
Kind regards,
Julius
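If the folders live in HDFS, invisible characters can be made visible from the shell; a quick sketch (the path is a placeholder):

hdfs dfs -ls /path/to/parent | cat -A   # cat -A prints tabs as ^I and marks each line end with $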
05-28-2020
02:37 AM
1 Kudo
Hello, To access the support portal you need to have a valid subscription with Cloudera, at least as far as I know. If you do have a valid subscription then I think it would be better to contact Cloudera directly or through your sales representative. Kind regards, Julius
05-06-2020
07:38 AM
Dear @Bender ,
Thank you very much for your prompt response. Enabling SPNEGO for HDFS did indeed solve our issue with YARN. The UIs are now accessible again (with a valid Kerberos ticket).
Üdvözlettel (Kind regards) 🙂
Gyuszi
05-05-2020
03:17 AM
Greetings,
After enabling "Kerberos Authentication for HTTP Web-Consoles" for YARN the Resource Manager WebUI and the HistoryServer Web UI become inaccessible with a valid Kerberos ticket (without a ticket the UI correctly gives the "Authentication required" HTTP 401 error message).
Navigating to either of the interfaces returns the following error:
HTTP ERROR 500
Problem accessing /jobhistory. Reason:
Server Error
Caused by:
java.lang.IllegalArgumentException: Empty key
at javax.crypto.spec.SecretKeySpec.<init>(SecretKeySpec.java:96)
at org.apache.hadoop.security.authentication.util.Signer.computeSignature(Signer.java:93)
at org.apache.hadoop.security.authentication.util.Signer.sign(Signer.java:59)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:587)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1553)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:539)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:259)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:748)
Meanwhile, in Cloudera Manager the YARN health checks report bad status for every component. In the YARN logs (hadoop-cmf-yarn-RESOURCEMANAGER) the following WARN messages appear:
2020-05-05 11:56:42,835 WARN org.eclipse.jetty.servlet.ServletHandler: /jmx
java.lang.IllegalArgumentException: Empty key
...
2020-05-05 11:57:56,461 WARN org.eclipse.jetty.servlet.ServletHandler: /ws/v1/cluster/info
java.lang.IllegalArgumentException: Empty key
at javax.crypto.spec.SecretKeySpec.<init>(SecretKeySpec.java:96)
at org.apache.hadoop.security.authentication.util.Signer.computeSignature(Signer.java:93)
at org.apache.hadoop.security.authentication.util.Signer.sign(Signer.java:59)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:587)
at org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.doFilter(RMAuthenticationFilter.java:82)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1553)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:536)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:259)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:748)
The cluster is Kerberized, TLS/SSL is enabled. As a side note, SPNEGO is enabled for the HBase WebUI and that works without issues.
Looking through the documentation and various online forums, I only found hints that suggested adding the Service Monitor Kerberos Principal to hdfs-site.xml, but obviously my issue is with YARN, not HDFS.
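For reference, the Signer "Empty key" in the traces above points at an empty HTTP-authentication signature secret. A hedged way to check whether the relevant properties made it into the generated role configuration - the process path is the CDH default, and the directory name pattern is illustrative:

grep -R "hadoop.http.authentication" /var/run/cloudera-scm-agent/process/*-yarn-RESOURCEMANAGER/ 2>/dev/null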
Thank you for your help in advance!
Kind regards,
Julius
Labels:
- Apache YARN
- Cloudera Manager
- Kerberos
05-04-2020
12:23 AM
Hi @lwang , Thank you very much for your reply, it answered all the questions I had. Kind regards, Julius
04-17-2020
07:43 AM
Greetings,
I am looking for a way to enforce password complexity requirements, mandatory password rotation, and mitigation of brute-force password cracking attacks on Cloudera Manager (CM) accounts.
I was researching user authentication options in the free version of CM and came to the conclusion that the only available option is SPNEGO/Kerberos, paired with Kerberos password policies.
In CM I enabled "Enable SPNEGO/Kerberos Authentication for the Admin Console and API", but I didn't notice anything different after restarting the service.
Are there other options (for the free version of CM 6.3) that I am missing? What are the options in the paid version?
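For reference, here is the quick test I would use to see whether SPNEGO actually took effect - hostname, port, and realm are placeholders:

kinit admin@EXAMPLE.COM                                         # obtain a ticket first
curl --negotiate -u : -k https://cm-host.example.com:7183/api/version

With the feature enabled, the second command should succeed only when a valid ticket is present and return the CM API version.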
Thank you, Kind regards,
Julius
Labels:
- Cloudera Manager
- Kerberos
- Security
01-30-2020
08:21 AM
Bumping this thread, as this issue still seems to be unresolved. Today I noticed that the documentation has been updated recently, and for CDH 6.3 there are some new steps as far as Hue and HBase are concerned: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/admin_hue_enable_apps.html The article also references this video - https://www.youtube.com/watch?v=FGDTGmaxvsM&feature=emb_title - which was uploaded to YouTube only recently, but shows a rather old version of both CDH and Hue 😕
01-20-2020
08:39 AM
Hi @GangWar, The SOLR principal is not present in CM.
01-08-2020
11:23 PM
*bump* 🙂 any ideas?
01-07-2020
01:02 AM
Greetings,
Yesterday while digging through the logs I found out that cloudera-scm-agent.log has frequent entries like this:
[07/Jan/2020 09:59:00 +0000] 22803 CredentialManager kt_renewer WARNING Couldn't kinit as 'solr/<hostname>' using /var/run/cloudera-scm-agent/process/2580-solr-SOLR_SERVER/solr.keytab --- kinit: Key table file '/var/run/cloudera-scm-agent/process/2580-solr-SOLR_SERVER/solr.keytab' not found while getting initial credentials
This is unexpected, as the Solr service is no longer present; I removed it manually a couple of months ago.
How can I tell the Credential Manager service to stop trying to renew the Kerberos ticket for Solr?
Thank you, kind regards,
Julius
Edit: after manually removing the solr principal the error changes to:
[07/Jan/2020 10:29:02 +0000] 22803 CredentialManager kt_renewer WARNING Couldn't kinit as 'solr/<hostname>' using /var/run/cloudera-scm-agent/process/2580-solr-SOLR_SERVER/solr.keytab --- kinit: Client 'solr/<hostname>' not found in Kerberos database while getting initial credentials
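A hedged cleanup sketch - the process path comes from the log lines above; verify that the directory really is a leftover before touching anything:

ls -d /var/run/cloudera-scm-agent/process/*solr*   # look for stale SOLR_SERVER process directories
sudo service cloudera-scm-agent hard_restart       # hard_restart recreates the process directory, but also restarts supervisord and the roles it manages - schedule accordingly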
Labels:
- Apache Solr
- Cloudera Manager
11-26-2019
07:49 AM
I'm terribly sorry for the necro-bump. The issue I'm experiencing seems to be the same one the OP was facing. We're running CDH 6.3, and I encountered a problem when trying to install the Kudu Python client. I created the symlinks as recommended by mpercy, but I'm still unable to install the kudu-python client.
Our cluster does not have direct access to the internet, so where possible we use an offline install. The official documentation says that the Kudu C++ client libraries and headers are needed for the Kudu Python client. On Oracle Linux 7, trying to install devtoolset-3-toolchain ends in failure, as a number of dependencies are missing:
Error: Package: devtoolset-3-gdb-7.8.2-38.el6.x86_64 (rhscl-devtoolset-3-epel-6-x86_64)
Requires: libpython2.6.so.1.0()(64bit)
Error: Package: devtoolset-3-gcc-gfortran-4.9.2-6.el6.x86_64 (rhscl-devtoolset-3-epel-6-x86_64)
Requires: libmpfr.so.1()(64bit)
Error: Package: devtoolset-3-gcc-c++-4.9.2-6.el6.x86_64 (rhscl-devtoolset-3-epel-6-x86_64)
Requires: libgmp.so.3()(64bit)
Error: Package: devtoolset-3-gcc-4.9.2-6.el6.x86_64 (rhscl-devtoolset-3-epel-6-x86_64)
Requires: libgmp.so.3()(64bit)
Error: Package: devtoolset-3-gcc-4.9.2-6.el6.x86_64 (rhscl-devtoolset-3-epel-6-x86_64)
Requires: libmpfr.so.1()(64bit)
Error: Package: devtoolset-3-gcc-gfortran-4.9.2-6.el6.x86_64 (rhscl-devtoolset-3-epel-6-x86_64)
Requires: libgmp.so.3()(64bit)
Error: Package: devtoolset-3-gcc-c++-4.9.2-6.el6.x86_64 (rhscl-devtoolset-3-epel-6-x86_64)
Requires: libmpfr.so.1()(64bit)
Disregarding that, running
pip install --no-index --find-links file:///data0/home/jkovacs/kudu-python-1.10.0.tar.gz
results in the following errors:
ERROR: Command errored out with exit status 1:
command: /usr/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-rK2Qbk/kudu-python/setup.py'"'"'; __file__='"'"'/tmp/pip-install-rK2Qbk/kudu-python/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-rK2Qbk/kudu-python/pip-egg-info
cwd: /tmp/pip-install-rK2Qbk/kudu-python/
Complete output (43 lines):
Building from system prefix /usr/local
/usr/lib64/python2.7/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /tmp/pip-install-rK2Qbk/kudu-python/kudu/client.pxd
tree = Parsing.p_module(s, pxd, full_module_name)
/usr/lib64/python2.7/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /tmp/pip-install-rK2Qbk/kudu-python/kudu/schema.pxd
tree = Parsing.p_module(s, pxd, full_module_name)
/usr/lib64/python2.7/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /tmp/pip-install-rK2Qbk/kudu-python/kudu/errors.pxd
tree = Parsing.p_module(s, pxd, full_module_name)
Compiling kudu/client.pyx because it depends on kudu/config.pxi.
Compiling kudu/errors.pyx because it depends on /usr/lib64/python2.7/site-packages/Cython/Includes/libcpp/string.pxd.
Compiling kudu/schema.pyx because it depends on kudu/config.pxi.
[1/3] Cythonizing kudu/client.pyx
[2/3] Cythonizing kudu/schema.pyx
[3/3] Cythonizing kudu/errors.pyx
WARNING: The wheel package is not available.
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7fe8bbca3910>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/pytest-runner/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7fe8bbca3110>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/pytest-runner/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7fe8bbca3090>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/pytest-runner/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7fe8bbca9e50>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/pytest-runner/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7fe8bbca9e90>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/pytest-runner/
ERROR: Could not find a version that satisfies the requirement pytest-runner (from versions: none)
ERROR: No matching distribution found for pytest-runner
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-rK2Qbk/kudu-python/setup.py", line 216, in <module>
test_suite="kudu.tests"
File "/usr/lib/python2.7/site-packages/setuptools/__init__.py", line 144, in setup
_install_setup_requires(attrs)
File "/usr/lib/python2.7/site-packages/setuptools/__init__.py", line 139, in _install_setup_requires
dist.fetch_build_eggs(dist.setup_requires)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 721, in fetch_build_eggs
replace_conflicting=True,
File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 782, in resolve
replace_conflicting=replace_conflicting
File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 1065, in best_match
return self.obtain(req, installer)
File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 1077, in obtain
return installer(requirement)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 777, in fetch_build_egg
return fetch_build_egg(self, req)
File "/usr/lib/python2.7/site-packages/setuptools/installer.py", line 121, in fetch_build_egg
raise DistutilsError(str(e))
distutils.errors.DistutilsError: Command '['/usr/bin/python', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/tmp/tmp8eqkO2', '--quiet', 'pytest-runner']' returned non-zero exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
I'd like to ask for some guidance - what should I try next? Thank you!
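Reading the traceback, the immediate failure is setuptools trying to fetch the setup_requires dependency pytest-runner from PyPI, which the air-gapped host cannot reach. A hedged workaround sketch - stage the wheels on a machine with internet access first (directory names are illustrative):

# On a connected machine:
pip download pytest-runner wheel -d ./kudu-build-deps
# Copy ./kudu-build-deps to the offline host, then pre-install the build dependencies:
pip install --no-index --find-links=./kudu-build-deps pytest-runner wheel
# With pytest-runner already installed, setuptools should skip the PyPI fetch:
pip install --no-index --find-links=./kudu-build-deps /data0/home/jkovacs/kudu-python-1.10.0.tar.gz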
09-09-2019
03:03 AM
Corrected: export HADOOP_CONF_DIR=/etc/spark/conf/yarn-conf/*:/etc/hive/conf:$HADOOP_CONF_DIR
09-03-2019
08:02 AM
We encountered the same problem after upgrading to CDH 6.3 from 5.15. The steps outlined by Bender helped us resolve the issue, with the following small difference: we modified the advanced configuration of the Spark 2 service - Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh - by adding the following line:
export HADOOP_CONF_DIR=/etc/spark/conf/yarn-conf/*:/etc/hive/conf
No cluster or service restart was necessary; simply re-deploying the client configs did the trick.
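A quick check that the snippet actually reached the clients after re-deployment - /etc/spark/conf is the usual CDH client config path; adjust if yours differs:

grep HADOOP_CONF_DIR /etc/spark/conf/spark-env.sh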
07-25-2019
03:07 AM
Hi Ben,
Thank you for your prompt response! The roles are as follows: HBase REST Server, HBase Thrift Server, Master, 3x RegionServer.
Here are the latest logs I obtained this morning after stopping the service (CM - HBase - Actions - Stop; I am asked "Are you sure you want to run the HBase Graceful Shutdown command on the service HBase?" and click Stop) and starting it again (CM - HBase - Actions - Start; the prompt asks "Are you sure you want to run the Start command on the service HBase?", I click Start, and the process continues with "Starting 6 roles on service")... and then it all started without issues.
I'm baffled, but also happy that it's working correctly now. I did two stop-and-starts and one restart, and the issue seems to be gone. Yesterday I was "weeding out" a Java 1.7 installation that popped up on our DEV cluster and was causing issues with Kerberos; I think it also interfered with our deployment of HBase.
Sorry for the false alarm - please feel free to remove this thread. Thank you for your time!
Kind regards,
Julius
07-24-2019
04:58 AM
Greetings,
We're encountering a rather strange issue. We are running HBase as part of a healthy CDH 5.15 cluster. Because of configuration changes we have to restart HBase every now and then, and every time, no matter what, HBase fails to start after a graceful shutdown. The error is always the same: kinit fails. I have to manually regenerate the HBase keytabs from Cloudera Manager, after which HBase starts right away. I looked around online but failed to find any mention of such behavior. Other services handle restarts without hiccups - no need to regenerate their keytabs.
Please let me know if additional details are needed to further debug this issue. Thank you for your help in advance!
Kind regards,
Julius
Labels:
- CDH Manual Installation
- HBase
- Kerberos
08-08-2018
11:14 AM
Thank you for this well written, comprehensive tutorial, it's very useful! Cheers!