Member since
10-04-2016
69
Posts
6
Kudos Received
5
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5238 | 03-23-2017 08:41 AM |
| | 2626 | 01-26-2017 07:22 PM |
| | 1879 | 12-23-2016 12:07 PM |
| | 6591 | 12-21-2016 01:54 PM |
| | 1582 | 12-05-2016 06:37 AM |
02-04-2024
08:11 AM
Hi @zhuw.bigdata To locate Spark logs, follow these steps:
1. Access the Spark UI: open the Spark UI in your web browser.
2. Identify nodes: navigate to the Executors tab to see the driver and executor nodes involved in the Spark application.
3. Determine the log directory: find the value of the yarn.nodemanager.log-dirs property in the Hadoop settings. This is the base directory for Spark logs on each cluster node.
4. Access the log location: using a terminal or SSH, log in to the relevant node (driver or executor) that holds the logs you need.
5. Navigate to the application log directory: within the yarn.nodemanager.log-dirs directory, open the subdirectory for the specific application, named application_${appid}, where ${appid} is the unique application ID of the Spark job.
6. Find container logs: within the application directory, locate the individual container log directories named container_${contid}, where ${contid} is the container ID.
7. Review log files: each container directory contains the log files generated by that container: stderr (standard error output), stdout (standard output), and syslog (system-level logs).
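The resulting directory layout can be sketched in a few lines of shell. All values below are hypothetical placeholders; substitute the log dir, application ID, and container ID from your own cluster:

```shell
# All values here are placeholders for illustration only.
LOG_DIR=/var/log/hadoop-yarn/container          # value of yarn.nodemanager.log-dirs on your node
APP_ID=application_1700000000000_0001           # your Spark job's application ID
CONT_ID=container_1700000000000_0001_01_000001  # a container ID of that application
echo "${LOG_DIR}/${APP_ID}/${CONT_ID}/stderr"
# With log aggregation enabled, you can also fetch all container logs at once:
#   yarn logs -applicationId ${APP_ID}
```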
01-16-2023
11:44 PM
@techfriend this can be resolved after modifying the principal. The failure looks like this:

WARNING: no policy specified for mapred/ip-172-31-46-169.us-west-2.compute.internal@HADM.RU; defaulting to no policy
add_principal: Principal or policy already exists while creating "mapred/ip-172-31-46-169.us-west-2.compute.internal@HADM.RU".
+ '[' 604800 -gt 0 ']'
++ kadmin -k -t /var/run/cloudera-scm-server/cmf5922922234613877041.keytab -p cloudera-scm/admin@HADM.RU -r HADM.RU -q 'getprinc -terse mapred/ip-172-31-46-169.us-west-2.compute.internal@HADM.RU'
++ tail -1
++ cut -f 12
+ RENEW_LIFETIME=0
+ '[' 0 -eq 0 ']'
+ echo 'Unable to set maxrenewlife'
+ exit 1

Run: modprinc -maxrenewlife 90day +allow_renewable mapred/ip-172-31-46-169.us-west-2.compute.internal@HADM.RU
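The check that fails above reads field 12 of kadmin's `getprinc -terse` output — the principal's maximum renewable life in seconds — and aborts when it is 0 (renewals disabled). A minimal sketch of that parsing, using a fabricated terse line, with the modprinc fix in a comment:

```shell
# Fabricated 12-field tab-separated line standing in for real `getprinc -terse`
# output; field 12 is the max renewable life in seconds.
line=$(printf 'f1\tf2\tf3\tf4\tf5\tf6\tf7\tf8\tf9\tf10\tf11\t604800')
RENEW_LIFETIME=$(echo "$line" | tail -1 | cut -f 12)
echo "$RENEW_LIFETIME"   # non-zero means renewals are allowed for the principal
# The actual fix, run in kadmin on the KDC:
#   modprinc -maxrenewlife 90day +allow_renewable mapred/ip-172-31-46-169.us-west-2.compute.internal@HADM.RU
```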
07-02-2020
11:57 PM
Try this SQL statement: select VALUE from scm.CONFIGS where ATTR="kdc_admin_user"; Here, scm is the name of the CM database in this example.
03-03-2020
09:30 PM
1 Kudo
Could you run the command from an administrator prompt rather than a normal one? From your error, it looks like you don't have enough privileges to run these commands.
10-15-2018
01:30 PM
Has anyone successfully solved this problem? I am installing a new Cloudera cluster using version 5.15.1, and we want to restrict the firewall rules to a port range for the nodes to communicate. However, when I run jobs, they start using ports that are not open and therefore fail.
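One property known to pin dynamically chosen ports to a fixed range is the MapReduce ApplicationMaster's client port range (an assumption that this covers the failing component in your case; the range itself is only an example):

```xml
<!-- mapred-site.xml: restrict the MR ApplicationMaster's client ports to a range -->
<property>
  <name>yarn.app.mapreduce.am.job.client.port-range</name>
  <value>50100-50200</value>
</property>
```

Other services may still pick ephemeral ports, so check which component's port is actually being blocked in your firewall logs.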
06-19-2018
09:30 AM
Has this situation improved over the past year? Is there any public information on how to secure the back-end database connections?
04-24-2017
12:11 AM
# cp /etc/my.cnf /etc/my.cnf.default
# vim /etc/my.cnf
--------------------------------
[mysqld]
# Disable password validation plugin
validate-password=off
--------------------------------
# systemctl restart mysqld
04-06-2017
05:27 AM
Hi, what about the version of Schema Registry? Which version should be used with CDH 5.8? Thanks in advance.
03-30-2017
09:41 AM
Switched to JDK 1.7 and got the same issue. It seems that the JDK can't pick up the ticket from the cache.

$ export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true -Dsun.security.krb5.debug=true $HADOOP_OPTS"
$ export HADOOP_ROOT_LOGGER=TRACE,console
$ export HADOOP_JAAS_DEBUG=true
$ hdfs dfs -ls 2> /tmp/hdfsls.txt

Java config name: null
Native config name: /etc/krb5.conf
Loaded from native config
[UnixLoginModule]: succeeded importing info:
uid = 1000
gid = 1000
supp gid = 4
supp gid = 10
supp gid = 190
supp gid = 1000
Debug is true storeKey false useTicketCache true useKeyTab false doNotPrompt true ticketCache is null isInitiator true KeyTab is null refreshKrb5Config is false principal is null tryFirstPass is false useFirstPass is false storePass is false clearPass is false
Acquire TGT from Cache
>>>KinitOptions cache name is /tmp/krb5cc_1000
Principal is null
null credentials from Ticket Cache
[Krb5LoginModule] authentication failed
Unable to obtain Princpal Name for authentication
[UnixLoginModule]: added UnixPrincipal, UnixNumericUserPrincipal, UnixNumericGroupPrincipal(s), to Subject
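The ">>>KinitOptions cache name is /tmp/krb5cc_1000" line shows where the JDK looked: its default credential cache path is /tmp/krb5cc_&lt;uid&gt;, and "null credentials from Ticket Cache" means no usable TGT was found there. A small sketch of that path logic (the uid comes from the UnixLoginModule lines in the trace); if klist shows the cache is empty or expired, re-running kinit and retrying usually clears this failure:

```shell
# The default JDK/MIT Kerberos credential cache path is /tmp/krb5cc_<uid>.
uid=1000   # from the "uid = 1000" line in the debug trace above
cache="/tmp/krb5cc_${uid}"
echo "$cache"
# To refresh:  kinit <principal> ; klist "$cache" ; hdfs dfs -ls
```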
03-23-2017
08:41 AM
1 Kudo
Resolved by using `%`.* in the grant statement, which leaves out the mysql database. AWS RDS will not let us touch that database in its PaaS offering.
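For reference, the working grant takes this shape (the user and host below are placeholders, not from the original thread). The database pattern must be backtick-quoted so MySQL treats % as a wildcard over database names rather than a syntax error:

```shell
# Build the grant statement; `%`.* is a database-level wildcard grant,
# unlike *.* which is a global grant that RDS disallows.
stmt='GRANT ALL PRIVILEGES ON `%`.* TO '\''scm'\''@'\''%'\'';'
echo "$stmt"
```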