Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 26227 | 03-03-2020 08:12 AM |
|  | 16360 | 02-28-2020 10:43 AM |
|  | 4702 | 12-16-2019 12:59 PM |
|  | 4468 | 11-12-2019 03:28 PM |
|  | 6644 | 11-01-2019 09:01 AM |
07-29-2019
08:29 AM
Hey Ben, thanks for getting back to me, and sorry about the delay in responding. I ended up testing this Friday and everything went smoothly with the CSD. I did run into some issues with a custom parcel and CM making the wrong *.sha file upon downloading, but putting the parcel in /opt/cloudera/parcel-repo helped clear that up. I may end up making that a different forum post, as it seems a bit buggy. Regards, Dan
07-26-2019
02:18 PM
> Is there any option to find an empty directory using an HDFS command directly?

You can list/find empty directories using the 'org.apache.solr.hadoop.HdfsFindTool'. To check/test whether a single directory is empty with the hdfs tool, you can use -du or -test; please see the FileSystemShell documentation [0].

test
Usage: hadoop fs -test -[defsz] URI
Options:
-d: if the path is a directory, return 0.
-e: if the path exists, return 0.
-f: if the path is a file, return 0.
-s: if the path is not empty, return 0.
-r: if the path exists and read permission is granted, return 0.
-w: if the path exists and write permission is granted, return 0.
-z: if the file is zero length, return 0.
Example:
hadoop fs -test -e filename

du
Usage: hadoop fs -du [-s] [-h] [-x] URI [URI ...]
Displays sizes of files and directories contained in the given directory, or the length of a file in case it's just a file.
Options:
The -s option will result in an aggregate summary of file lengths being displayed, rather than the individual files. Without the -s option, calculation is done by going 1-level deep from the given path.
The -h option will format file sizes in a “human-readable” fashion (e.g. 64.0m instead of 67108864).
The -x option will exclude snapshots from the result calculation. Without the -x option (default), the result is always calculated from all INodes, including all snapshots under the given path.
The du returns three columns with the following format:
size disk_space_consumed_with_all_replicas full_path_name
Example:
hadoop fs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://nn.example.com/user/hadoop/dir1
Exit Code: Returns 0 on success and -1 on error.

[0] https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html
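For completeness, here is a minimal sketch of an empty-directory check built from the -test and -du subcommands described above; the path /user/hadoop/dir1 is only an example, and it assumes the hadoop CLI is on the PATH:

#!/usr/bin/env bash
DIR=/user/hadoop/dir1

# -test -d returns 0 only if the path exists and is a directory
if ! hadoop fs -test -d "$DIR"; then
  echo "$DIR is not a directory (or does not exist)"
  exit 1
fi

# -du prints one line per immediate child, so no output means the directory is empty
if [ -z "$(hadoop fs -du "$DIR")" ]; then
  echo "$DIR is empty"
else
  echo "$DIR has contents"
fi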
07-26-2019
09:00 AM
Hi @dennisli, I hope you are doing well. Please confirm the values in the fields below that I commented on:

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value> <!-- this is the metadata DB driver -->
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value> <!-- this is the metadata DB username -->
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>mypassword</value> <!-- this is the metadata DB password -->
</property>

Thanks, HadoopHelp
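P.S. A quick way to confirm those values is to try them against the database directly; a minimal sketch, assuming the metastore database is named "metastore" and substituting your own DB host (neither is shown in the snippet above):

# connect with the user/password from hive-site.xml and list the metastore tables
mysql -h <metastore-db-host> -u hive -pmypassword -e 'SHOW TABLES;' metastore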
07-25-2019
10:00 AM
This is a bit late, but I'll post the solution that worked for me: the problem was the hostnames. Impala with Kerberos wants the hostnames in lowercase.
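In case it helps the next person, a minimal sketch of checking and fixing the case on a host (assumes a systemd-based Linux node; node01.example.com is a made-up value):

# show the FQDN the host currently reports
hostname -f

# re-set it in all-lowercase so it matches the Kerberos principal
sudo hostnamectl set-hostname node01.example.com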
07-24-2019
05:24 AM
Hi @bgooley, I have downgraded MySQL from version 10.2 to 10.0. Now it is working fine. :) Thank you so much for your support. Thanks, Vijaya
07-23-2019
10:37 AM
@Roroka, I am very glad to hear that you got it working. The reason I asked about Cloudera Express 6.x is that it does not have support for Auto-TLS. It is certainly possible to set up TLS manually (our docs cover that). If you have any questions, this community is a good place to ask. Cheers.
07-23-2019
09:36 AM
@wsmolak, This thread covers a different issue that is quite old. Let's continue the conversation in the other thread you opened if that is ok: https://community.cloudera.com/t5/Cloudera-Manager-Installation/Problem-with-path-to-parcels/m-p/93062#M17196?eid=1&aid=1 Please explain what problem you are facing if there is one.
07-23-2019
03:51 AM
Thanks. It worked.
07-22-2019
02:54 PM
@mmmunafo, I guess your workaround should be OK. The only other option I could see would be to wrap the pam.authenticate() call with an unset and set of KRB5CCNAME. Assuming authentication takes milliseconds, it would be unlikely that Hue is attempting to retrieve cache information at that moment, but I don't know that it is any better than what you are up to. For instance, in desktop/core/src/desktop/auth/backend.py, wrap:

if pam.authenticate(username, password, desktop.conf.AUTH.PAM_SERVICE.get()):

with

del os.environ['KRB5CCNAME']

and then, after auth:

os.environ['KRB5CCNAME'] = desktop.conf.KERBEROS.CCACHE_PATH.get()

NOTE: we would need to import os in backend.py to do that. So possibly, something like this would work:

class PamBackend(DesktopBackendBase):
  """
  Authentication backend that uses PAM to authenticate logins. The first user to
  login will become the superuser.
  """

  @metrics.pam_authentication_time
  def authenticate(self, request=None, username=None, password=None):
    username = force_username_case(username)
    # unset the credential cache path before handing the password to PAM
    del os.environ['KRB5CCNAME']
    if pam.authenticate(username, password, desktop.conf.AUTH.PAM_SERVICE.get()):
      # restore it as soon as authentication succeeds
      os.environ['KRB5CCNAME'] = desktop.conf.KERBEROS.CCACHE_PATH.get()
      is_super = False
      if User.objects.count() == 0:
        is_super = True

      try:
        if desktop.conf.AUTH.IGNORE_USERNAME_CASE.get():
          user = User.objects.get(username__iexact=username)
        else:
          user = User.objects.get(username=username)
      except User.DoesNotExist:
        user = find_or_create_user(username, None)

      if user is not None and user.is_active:
        profile = get_profile(user)
        profile.creation_method = UserProfile.CreationMethod.EXTERNAL.name
        profile.save()
        user.is_superuser = is_super
        ensure_has_a_group(user)
        user.save()

      user = rewrite_user(user)
      return user

    # restore it on the failure path as well
    os.environ['KRB5CCNAME'] = desktop.conf.KERBEROS.CCACHE_PATH.get()
    return None

  @classmethod
  def manages_passwords_externally(cls):
    return True

Might not be worth it, though.
07-18-2019
06:31 PM
@Debashish, yup, stderr.log looks good. Please check your Region Server log file on that host (usually it is in /var/log/hbase with the name REGIONSERVER in it). I am guessing you will see an attempt to start up and then a failure of sorts.
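A minimal sketch of what that check can look like on the affected host (the exact log file name varies by deployment, so the pattern below is an assumption):

# list the Region Server logs and tail the most recent one
ls -lt /var/log/hbase/ | grep -i REGIONSERVER
tail -n 200 /var/log/hbase/<regionserver-log-file-from-the-listing-above>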