Member since: 09-02-2016
Posts: 523
Kudos Received: 89
Solutions: 42
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2309 | 08-28-2018 02:00 AM |
| | 2160 | 07-31-2018 06:55 AM |
| | 5070 | 07-26-2018 03:02 AM |
| | 2433 | 07-19-2018 02:30 AM |
| | 5863 | 05-21-2018 03:42 AM |
11-28-2016
09:24 PM
Note: I can't find the keytab file at the path below. Could that be causing the trouble? /var/run/cloudera-scm-server/cmf4310122296840901236.keytab
11-28-2016
09:22 PM
Hi,

Our test environment runs RedHat 6.x. The Kerberos installation went well, but we get the following error when enabling Kerberos via the CM wizard. All of our services were green before enabling Kerberos, but now all the services are down with this error:

"Role is missing Kerberos keytab. Please run the Generate Missing Credentials command on the Kerberos Credentials tab of the Administration -> Security page"

I tried to generate the missing credentials on the Security page, but it failed with the error message below. Pls help me understand how to proceed further...

/usr/share/cmf/bin/gen_credentials.sh failed with exit code 1 and output of <<
+ export PATH=/usr/kerberos/bin:/usr/kerberos/sbin:/usr/lib/mit/sbin:/usr/sbin:/usr/lib/mit/bin:/usr/bin:/sbin:/usr/sbin:/bin:/usr/bin
+ PATH=/usr/kerberos/bin:/usr/kerberos/sbin:/usr/lib/mit/sbin:/usr/sbin:/usr/lib/mit/bin:/usr/bin:/sbin:/usr/sbin:/bin:/usr/bin
+ CMF_REALM=AWS.COM
+ KEYTAB_OUT=/var/run/cloudera-scm-server/cmf4310122296840901236.keytab
+ PRINC=oozie/<hostname>@AWS.COM
+ MAX_RENEW_LIFE=432000
+ KADMIN='kadmin -k -t /var/run/cloudera-scm-server/cmf8961661390083798972.keytab -p root@AWS.COM -r AWS.COM'
+ RENEW_ARG=
+ '[' 432000 -gt 0 ']'
+ RENEW_ARG='-maxrenewlife "432000 sec"'
+ '[' -z /var/run/cloudera-scm-server/krb51519941863236958532.conf ']'
+ echo 'Using custom config path '\''/var/run/cloudera-scm-server/krb51519941863236958532.conf'\'', contents below:'
+ cat /var/run/cloudera-scm-server/krb51519941863236958532.conf
+ kadmin -k -t /var/run/cloudera-scm-server/cmf8961661390083798972.keytab -p root@AWS.COM -r AWS.COM -q 'addprinc -maxrenewlife "432000 sec" -randkey oozie/<hostname>@AWS.COM'
WARNING: no policy specified for oozie/<hostname>@AWS.COM; defaulting to no policy
add_principal: Operation requires ``add'' privilege while creating "oozie/<hostname>@AWS.COM".
+ '[' 432000 -gt 0 ']'
++ kadmin -k -t /var/run/cloudera-scm-server/cmf8961661390083798972.keytab -p root@AWS.COM -r AWS.COM -q 'getprinc -terse oozie/<hostname>@AWS.COM'
++ tail -1
++ cut -f 12
get_principal: Operation requires ``get'' privilege while retrieving "oozie/<hostname>@AWS.COM".
+ RENEW_LIFETIME='Authenticating as principal root@AWS.COM with keytab /var/run/cloudera-scm-server/cmf8961661390083798972.keytab.'
+ '[' Authenticating as principal root@AWS.COM with keytab /var/run/cloudera-scm-server/cmf8961661390083798972.keytab. -eq 0 ']'
/usr/share/cmf/bin/gen_credentials.sh: line 35: [: too many arguments
+ kadmin -k -t /var/run/cloudera-scm-server/cmf8961661390083798972.keytab -p root@AWS.COM -r AWS.COM -q 'xst -k /var/run/cloudera-scm-server/cmf4310122296840901236.keytab oozie/<hostname>@AWS.COM'
kadmin: Operation requires ``change-password'' privilege while changing oozie/<hostname>@AWS.COM's key
+ chmod 600 /var/run/cloudera-scm-server/cmf4310122296840901236.keytab
chmod: cannot access `/var/run/cloudera-scm-server/cmf4310122296840901236.keytab': No such file or directory
>>

Output of kadmin.local:

kadmin.local: listprincs
cloudera-scm/admin@AWS.COM
cloudera-scm/<Master_Domain>@AWS.COM
cloudera-scm/<hostname>@AWS.COM
host/<Clienthost1_name>@AWS.COM
host/<Clienthost2_name>@AWS.COM
kadmin/admin@AWS.COM
kadmin/changepw@AWS.COM
kadmin/<Master_hostname>@AWS.COM
krbtgt/AWS.COM@AWS.COM
kumar@AWS.COM
oozie/<Master_Domain>@AWS.COM
oozie/<Master_hostname>@AWS.COM
root/admin@AWS.COM
root@AWS.COM

Note: all the services belong to the master host.

Thanks
Kumar
Labels:
- Cloudera Manager
- Kerberos
11-17-2016
09:26 PM
I am giving you one more option...

Note: You can hard-code the $ variables with the actual path/file/table names. (A worked example with the variables hard-coded follows below.)

Step 1: Take a backup, then delete the data and the table from the DB where you have the issue.

Step 2: From the shell, export the working db.table; the export contains both data and metadata:
hive -S -e "export table $schema_file1.$tbl_file1 to '$HDFS_DATA_PATH/$tbl_file1';"

Step 3: Import into the target DB where you need the data. The first import will throw an error because the table doesn't exist in the target DB yet, but it automatically creates the table:
hive -S -e "import table $schema_file1.$tbl_file1 from '$HDFS_DATA_PATH/$tbl_file1';"

Step 4: The second import loads the data without any error, as the table is available now:
hive -S -e "import table $schema_file1.$tbl_file1 from '$HDFS_DATA_PATH/$tbl_file1';"

Thanks
Kumar
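For anyone who wants to see it end to end, here is a minimal sketch with the variables hard-coded; the names sourcedb, targetdb, mytable and the HDFS path /tmp/hive_export are placeholders, not values from this thread:

# Step 2: export the working table (data + metadata) to HDFS
hive -S -e "export table sourcedb.mytable to '/tmp/hive_export/mytable';"

# Step 3: first import into the target DB; it errors once but creates the table definition
hive -S -e "import table targetdb.mytable from '/tmp/hive_export/mytable';"

# Step 4: second import; loads the data now that the table exists
hive -S -e "import table targetdb.mytable from '/tmp/hive_export/mytable';"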
11-17-2016
09:09 PM
You can try this and let me know...

Note: If you are using Cloudera, the warehouse path is /user/hive/warehouse/; if you are on a Hortonworks distribution, replace it with /apps/hive/warehouse/.

Step 1: Run the command below to see the files behind your Hive table. Replace mydb with your db name and mytable with your table name:
# hadoop fs -ls /user/hive/warehouse/mydb.db/mytable

Step 2: If the command above returns something like /user/hive/warehouse/mydb.db/mytable/000000_0, run the command below; it prints the tail end (the last kilobyte) of that file:
# hadoop fs -tail /user/hive/warehouse/mydb.db/mytable/000000_0

Step 3: You mentioned that you have the create script, so compare the list of columns from the create script with a record from the output above.

Step 4: Make sure the columns in the create script match the data (especially the problematic column lbs_retail_celleliste_jakntest). A short worked sketch follows below.

Thanks
Kumar
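A worked pass over these steps with placeholder names (mydb, mytable; adjust the warehouse root per distribution as noted above) could look like this:

# List the files backing the table
hadoop fs -ls /user/hive/warehouse/mydb.db/mytable

# Inspect the tail of one of the data files returned above
hadoop fs -tail /user/hive/warehouse/mydb.db/mytable/000000_0

# Print the table definition to compare its columns against the raw data
hive -S -e "describe mydb.mytable;"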
11-16-2016
08:33 AM
$CONDITIONS needs to be appended to the query; it is Sqoop syntax and is not related to --num-mappers.

Ex: select col1, col2 from table1 where $CONDITIONS

It also looks like you are missing --split-by <column>; it is recommended for free-form query imports, and mandatory once you use more than one mapper. A short example is below.

Thanks
Kumar
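For reference, a free-form query import could look like the sketch below; the JDBC URL, credentials, table and column names are placeholders, not values from your job:

sqoop import \
  --connect jdbc:mysql://dbhost/mydb \
  --username xxx \
  --password yyy \
  --query 'select col1, col2 from table1 where $CONDITIONS' \
  --split-by col1 \
  --target-dir /user/kumar/table1_import \
  --num-mappers 2

Keep $CONDITIONS inside single quotes (or escape it as \$CONDITIONS inside double quotes) so the shell does not expand it.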
11-16-2016
07:34 AM
You can break your Sqoop command across lines using \ but you should not break the query itself. Also, 'where $CONDITIONS' is mandatory and $CONDITIONS must be in upper case. Pls make the corrections below and try again.

Current version (only a few steps shown):
> --username xxx \
> --password yyy \
> --query 'SELECT citi.id, \
> country.name, \
> citi.city \
> FROM citi \
> JOIN country USING(country_id) \
> --num-mappers 1 \
Updated version (only a few steps shown):
> --username xxx \
> --password yyy \
> --query 'SELECT citi.id, country.name, citi.city FROM citi JOIN country USING(country_id) where $CONDITIONS' \
> --num-mappers 1 \
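Put together with a connect string and target directory (both placeholders I have added for completeness, not values from your post), the full command would look roughly like this:

sqoop import \
  --connect jdbc:mysql://dbhost/world \
  --username xxx \
  --password yyy \
  --query 'SELECT citi.id, country.name, citi.city FROM citi JOIN country USING(country_id) where $CONDITIONS' \
  --num-mappers 1 \
  --target-dir /user/kumar/citi_country

With --num-mappers 1 no --split-by is needed; if you raise the mapper count, add --split-by on a suitable key column (for example citi.id).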
11-14-2016
01:49 PM
-m 1 is the correct form (do not use -m1). But per your code, you are using -m 1 together with {{cluster_data.worker_node_hostname.length}}; remove {{cluster_data.worker_node_hostname.length}} and try again with only -m 1. Also, I would recommend practicing with import instead of import-all-tables, because with import you can tune "-m" per table based on the file/table size (like -m 2, -m 3, etc.), as in the sketch below.
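For illustration, a single-table import tuned with its own mapper count might look like this; the connect string, table names, and target directory are placeholders:

# small table: one mapper is enough
sqoop import \
  --connect jdbc:mysql://dbhost/mydb \
  --username xxx \
  --password yyy \
  --table small_table \
  --target-dir /user/kumar/small_table \
  -m 1

# a bigger table can simply use more mappers, e.g. --table big_table --split-by id -m 4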
11-14-2016
12:25 PM
We are getting the same issue after upgrading from CDH 5.2 to CDH 5.7. It looks like this is an issue with CDH 5.4.x and above (not sure whether the same issue occurs for those who install CDH 5.4.x or above directly, without upgrading from a lower version). I don't want to shut off this alert. Pls let me know if there is any other way to fix this issue.

Thanks
Kumar
11-14-2016
10:25 AM
Pls try this and see if it helps:

ANALYZE TABLE Table_Name COMPUTE STATISTICS;
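If the table is partitioned, or you also want column-level statistics, the variants below may help; table_name, the partition column dt, and col1/col2 are placeholders:

ANALYZE TABLE table_name PARTITION (dt='2016-11-01') COMPUTE STATISTICS;
ANALYZE TABLE table_name COMPUTE STATISTICS FOR COLUMNS col1, col2;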
11-03-2016
06:20 PM
4 Kudos
If you need more details, pls refer below.

mapred.map.child.java.opts is for Hadoop 1.x. If you are on Hadoop 2.x, pls use the parameters below instead:

mapreduce.map.java.opts=-Xmx4g # Note: 4 GB
mapreduce.reduce.java.opts=-Xmx4g # Note: 4 GB

Also, when you set java.opts, you need to note two important points:
1. It has a dependency on memory.mb, so always set java.opts to about 80% of memory.mb (see the worked example after this post).
2. Follow the "-Xmx4g" format for the opts, but use a plain numerical value (in MB) for memory.mb:

mapreduce.map.memory.mb=5120 # Note: 5 GB
mapreduce.reduce.memory.mb=5120 # Note: 5 GB

Finally, some organizations will not allow you to alter mapred-site.xml directly or via CM. Also, we need this kind of setup only to handle very big tables, so it is not recommended to change the global configuration for just a few tables. You can apply the setup temporarily by following the steps below:

1. From the shell, before launching Hive:
HDFS> export HIVE_OPTS="-hiveconf mapreduce.map.memory.mb=5120 -hiveconf mapreduce.reduce.memory.mb=5120 -hiveconf mapreduce.map.java.opts=-Xmx4g -hiveconf mapreduce.reduce.java.opts=-Xmx4g"

2. From Hive:
hive> set mapreduce.map.memory.mb=5120;
hive> set mapreduce.reduce.memory.mb=5120;
hive> set mapreduce.map.java.opts=-Xmx4g;
hive> set mapreduce.reduce.java.opts=-Xmx4g;

Note: HIVE_OPTS handles only Hive; if you need a similar setup for plain Hadoop jobs, use HADOOP_OPTS.

Thanks
Kumar
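As a worked example of the 80% rule with a different container size (the 8 GB figure here is just an assumption, not from this post): memory.mb = 8192 MB, and 80% of 8192 is roughly 6553 MB, so round the heap down to -Xmx6g:

hive> set mapreduce.map.memory.mb=8192;
hive> set mapreduce.reduce.memory.mb=8192;
hive> set mapreduce.map.java.opts=-Xmx6g;
hive> set mapreduce.reduce.java.opts=-Xmx6g;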